00:00:00.001 Started by upstream project "autotest-per-patch" build number 126229 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.032 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.033 The recommended git tool is: git 00:00:00.033 using credential 00000000-0000-0000-0000-000000000002 00:00:00.036 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.047 Fetching changes from the remote Git repository 00:00:00.053 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.083 Using shallow fetch with depth 1 00:00:00.083 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.083 > git --version # timeout=10 00:00:00.126 > git --version # 'git version 2.39.2' 00:00:00.126 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.175 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.175 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.939 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.951 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.965 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:02.965 > git config core.sparsecheckout # timeout=10 00:00:02.975 > git read-tree -mu HEAD # timeout=10 00:00:02.991 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:03.012 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:03.012 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:03.093 [Pipeline] Start of Pipeline 00:00:03.109 [Pipeline] library 00:00:03.110 Loading library shm_lib@master 00:00:03.110 Library shm_lib@master is cached. Copying from home. 00:00:03.127 [Pipeline] node 00:00:03.133 Running on WFP16 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.137 [Pipeline] { 00:00:03.145 [Pipeline] catchError 00:00:03.146 [Pipeline] { 00:00:03.157 [Pipeline] wrap 00:00:03.166 [Pipeline] { 00:00:03.173 [Pipeline] stage 00:00:03.174 [Pipeline] { (Prologue) 00:00:03.438 [Pipeline] sh 00:00:03.718 + logger -p user.info -t JENKINS-CI 00:00:03.735 [Pipeline] echo 00:00:03.737 Node: WFP16 00:00:03.741 [Pipeline] sh 00:00:04.032 [Pipeline] setCustomBuildProperty 00:00:04.040 [Pipeline] echo 00:00:04.041 Cleanup processes 00:00:04.044 [Pipeline] sh 00:00:04.320 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.320 3919742 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.332 [Pipeline] sh 00:00:04.608 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.608 ++ grep -v 'sudo pgrep' 00:00:04.608 ++ awk '{print $1}' 00:00:04.608 + sudo kill -9 00:00:04.608 + true 00:00:04.622 [Pipeline] cleanWs 00:00:04.631 [WS-CLEANUP] Deleting project workspace... 00:00:04.631 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.637 [WS-CLEANUP] done 00:00:04.642 [Pipeline] setCustomBuildProperty 00:00:04.656 [Pipeline] sh 00:00:04.934 + sudo git config --global --replace-all safe.directory '*' 00:00:05.007 [Pipeline] httpRequest 00:00:05.023 [Pipeline] echo 00:00:05.025 Sorcerer 10.211.164.101 is alive 00:00:05.031 [Pipeline] httpRequest 00:00:05.035 HttpMethod: GET 00:00:05.036 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.036 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.047 Response Code: HTTP/1.1 200 OK 00:00:05.047 Success: Status code 200 is in the accepted range: 200,404 00:00:05.047 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.733 [Pipeline] sh 00:00:09.017 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:09.033 [Pipeline] httpRequest 00:00:09.067 [Pipeline] echo 00:00:09.069 Sorcerer 10.211.164.101 is alive 00:00:09.078 [Pipeline] httpRequest 00:00:09.082 HttpMethod: GET 00:00:09.083 URL: http://10.211.164.101/packages/spdk_24018edd4c2be1d87cba51c15e3485eaa4f3e7b5.tar.gz 00:00:09.084 Sending request to url: http://10.211.164.101/packages/spdk_24018edd4c2be1d87cba51c15e3485eaa4f3e7b5.tar.gz 00:00:09.107 Response Code: HTTP/1.1 200 OK 00:00:09.107 Success: Status code 200 is in the accepted range: 200,404 00:00:09.108 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_24018edd4c2be1d87cba51c15e3485eaa4f3e7b5.tar.gz 00:00:59.494 [Pipeline] sh 00:00:59.778 + tar --no-same-owner -xf spdk_24018edd4c2be1d87cba51c15e3485eaa4f3e7b5.tar.gz 00:01:03.986 [Pipeline] sh 00:01:04.273 + git -C spdk log --oneline -n5 00:01:04.273 24018edd4 all: replace spdk_env_opts_init/spdk_env_init with _ext variant 00:01:04.273 3269bc4bc env: add spdk_env_opts_init_ext() 00:01:04.273 d9917142f env: pack and assert size for spdk_env_opts 00:01:04.273 1bd83e221 sock: add spdk_sock_get_numa_socket_id 00:01:04.273 20d0fd684 sock: add spdk_sock_get_interface_name 00:01:04.287 [Pipeline] } 00:01:04.306 [Pipeline] // stage 00:01:04.317 [Pipeline] stage 00:01:04.319 [Pipeline] { (Prepare) 00:01:04.341 [Pipeline] writeFile 00:01:04.359 [Pipeline] sh 00:01:04.639 + logger -p user.info -t JENKINS-CI 00:01:04.653 [Pipeline] sh 00:01:04.938 + logger -p user.info -t JENKINS-CI 00:01:04.955 [Pipeline] sh 00:01:05.322 + cat autorun-spdk.conf 00:01:05.322 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.322 SPDK_TEST_NVMF=1 00:01:05.322 SPDK_TEST_NVME_CLI=1 00:01:05.322 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:05.322 SPDK_TEST_NVMF_NICS=e810 00:01:05.322 SPDK_TEST_VFIOUSER=1 00:01:05.322 SPDK_RUN_UBSAN=1 00:01:05.322 NET_TYPE=phy 00:01:05.330 RUN_NIGHTLY=0 00:01:05.335 [Pipeline] readFile 00:01:05.366 [Pipeline] withEnv 00:01:05.369 [Pipeline] { 00:01:05.384 [Pipeline] sh 00:01:05.667 + set -ex 00:01:05.667 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:05.667 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:05.667 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.667 ++ SPDK_TEST_NVMF=1 00:01:05.667 ++ SPDK_TEST_NVME_CLI=1 00:01:05.667 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:05.667 ++ SPDK_TEST_NVMF_NICS=e810 00:01:05.667 ++ SPDK_TEST_VFIOUSER=1 00:01:05.667 ++ SPDK_RUN_UBSAN=1 00:01:05.667 ++ NET_TYPE=phy 00:01:05.667 ++ RUN_NIGHTLY=0 00:01:05.667 + case $SPDK_TEST_NVMF_NICS in 00:01:05.667 + DRIVERS=ice 00:01:05.667 + [[ tcp == \r\d\m\a ]] 
00:01:05.667 + [[ -n ice ]] 00:01:05.667 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:05.667 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:12.237 rmmod: ERROR: Module irdma is not currently loaded 00:01:12.237 rmmod: ERROR: Module i40iw is not currently loaded 00:01:12.237 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:12.237 + true 00:01:12.237 + for D in $DRIVERS 00:01:12.237 + sudo modprobe ice 00:01:12.237 + exit 0 00:01:12.246 [Pipeline] } 00:01:12.265 [Pipeline] // withEnv 00:01:12.274 [Pipeline] } 00:01:12.291 [Pipeline] // stage 00:01:12.301 [Pipeline] catchError 00:01:12.303 [Pipeline] { 00:01:12.318 [Pipeline] timeout 00:01:12.318 Timeout set to expire in 50 min 00:01:12.320 [Pipeline] { 00:01:12.335 [Pipeline] stage 00:01:12.337 [Pipeline] { (Tests) 00:01:12.349 [Pipeline] sh 00:01:12.631 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.631 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.631 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.631 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:12.631 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:12.631 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:12.631 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:12.631 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:12.631 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:12.631 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:12.631 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:12.631 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.631 + source /etc/os-release 00:01:12.631 ++ NAME='Fedora Linux' 00:01:12.631 ++ VERSION='38 (Cloud Edition)' 00:01:12.631 ++ ID=fedora 00:01:12.631 ++ VERSION_ID=38 00:01:12.631 ++ VERSION_CODENAME= 00:01:12.631 ++ PLATFORM_ID=platform:f38 00:01:12.631 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:12.631 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:12.631 ++ LOGO=fedora-logo-icon 00:01:12.631 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:12.631 ++ HOME_URL=https://fedoraproject.org/ 00:01:12.631 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:12.631 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:12.631 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:12.631 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:12.631 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:12.631 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:12.631 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:12.631 ++ SUPPORT_END=2024-05-14 00:01:12.631 ++ VARIANT='Cloud Edition' 00:01:12.631 ++ VARIANT_ID=cloud 00:01:12.631 + uname -a 00:01:12.631 Linux spdk-wfp-16 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:12.631 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:15.167 Hugepages 00:01:15.167 node hugesize free / total 00:01:15.167 node0 1048576kB 0 / 0 00:01:15.167 node0 2048kB 0 / 0 00:01:15.167 node1 1048576kB 0 / 0 00:01:15.167 node1 2048kB 0 / 0 00:01:15.167 00:01:15.167 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:15.167 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:15.167 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:15.167 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:15.167 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:15.167 I/OAT 0000:00:04.4 8086 2021 
0 ioatdma - - 00:01:15.167 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:15.167 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:15.167 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:15.167 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:15.167 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:15.167 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:15.167 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:15.167 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:15.167 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:15.167 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:15.167 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:15.168 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:15.168 + rm -f /tmp/spdk-ld-path 00:01:15.168 + source autorun-spdk.conf 00:01:15.168 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.168 ++ SPDK_TEST_NVMF=1 00:01:15.168 ++ SPDK_TEST_NVME_CLI=1 00:01:15.168 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.168 ++ SPDK_TEST_NVMF_NICS=e810 00:01:15.168 ++ SPDK_TEST_VFIOUSER=1 00:01:15.168 ++ SPDK_RUN_UBSAN=1 00:01:15.168 ++ NET_TYPE=phy 00:01:15.168 ++ RUN_NIGHTLY=0 00:01:15.168 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:15.168 + [[ -n '' ]] 00:01:15.168 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:15.168 + for M in /var/spdk/build-*-manifest.txt 00:01:15.168 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:15.168 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:15.168 + for M in /var/spdk/build-*-manifest.txt 00:01:15.168 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:15.168 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:15.168 ++ uname 00:01:15.168 + [[ Linux == \L\i\n\u\x ]] 00:01:15.168 + sudo dmesg -T 00:01:15.168 + sudo dmesg --clear 00:01:15.168 + dmesg_pid=3920678 00:01:15.168 + [[ Fedora Linux == FreeBSD ]] 00:01:15.168 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:15.168 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:15.168 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:15.168 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:15.168 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:15.168 + [[ -x /usr/src/fio-static/fio ]] 00:01:15.168 + export FIO_BIN=/usr/src/fio-static/fio 00:01:15.168 + FIO_BIN=/usr/src/fio-static/fio 00:01:15.168 + sudo dmesg -Tw 00:01:15.168 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:15.168 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:15.168 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:15.168 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:15.168 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:15.168 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:15.168 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:15.168 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:15.168 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:15.168 Test configuration: 00:01:15.168 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.168 SPDK_TEST_NVMF=1 00:01:15.168 SPDK_TEST_NVME_CLI=1 00:01:15.168 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.168 SPDK_TEST_NVMF_NICS=e810 00:01:15.168 SPDK_TEST_VFIOUSER=1 00:01:15.168 SPDK_RUN_UBSAN=1 00:01:15.168 NET_TYPE=phy 00:01:15.427 RUN_NIGHTLY=0 19:59:40 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:15.427 19:59:40 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:15.427 19:59:40 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:15.427 19:59:40 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:15.427 19:59:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.427 19:59:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.427 19:59:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.427 19:59:40 -- paths/export.sh@5 -- $ export PATH 00:01:15.427 19:59:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.427 19:59:40 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:15.427 19:59:40 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:15.427 19:59:40 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721066380.XXXXXX 00:01:15.427 19:59:40 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721066380.iVcgt7 00:01:15.427 19:59:40 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:15.427 19:59:40 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:15.427 19:59:40 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:15.427 19:59:40 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:15.427 19:59:40 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:15.427 19:59:40 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:15.427 19:59:40 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:15.427 19:59:40 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.427 19:59:40 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:15.427 19:59:40 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:15.427 19:59:40 -- pm/common@17 -- $ local monitor 00:01:15.427 19:59:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.427 19:59:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.427 19:59:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.427 19:59:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.427 19:59:40 -- pm/common@25 -- $ sleep 1 00:01:15.427 19:59:40 -- pm/common@21 -- $ date +%s 00:01:15.427 19:59:40 -- pm/common@21 -- $ date +%s 00:01:15.427 19:59:40 -- pm/common@21 -- $ date +%s 00:01:15.427 19:59:40 -- pm/common@21 -- $ date +%s 00:01:15.427 19:59:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721066380 00:01:15.428 19:59:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721066380 00:01:15.428 19:59:40 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721066380 00:01:15.428 19:59:40 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721066380 00:01:15.428 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721066380_collect-vmstat.pm.log 00:01:15.428 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721066380_collect-cpu-load.pm.log 00:01:15.428 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721066380_collect-cpu-temp.pm.log 00:01:15.428 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721066380_collect-bmc-pm.bmc.pm.log 00:01:16.363 19:59:41 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:16.363 19:59:41 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:16.363 19:59:41 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:16.363 19:59:41 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:16.363 19:59:41 -- spdk/autobuild.sh@16 -- $ date -u 00:01:16.363 Mon Jul 15 05:59:41 PM UTC 2024 00:01:16.363 19:59:41 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:16.363 v24.09-pre-225-g24018edd4 00:01:16.363 19:59:41 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:16.363 19:59:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:16.363 19:59:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:16.363 19:59:41 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:16.363 19:59:41 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:16.363 19:59:41 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.363 ************************************ 00:01:16.363 START TEST ubsan 00:01:16.363 ************************************ 00:01:16.363 19:59:41 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:16.363 using ubsan 00:01:16.363 00:01:16.363 real 0m0.000s 00:01:16.363 user 0m0.000s 00:01:16.363 sys 0m0.000s 00:01:16.363 19:59:41 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:16.363 19:59:41 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:16.363 ************************************ 00:01:16.363 END TEST ubsan 00:01:16.363 ************************************ 00:01:16.363 19:59:41 -- common/autotest_common.sh@1142 -- $ return 0 00:01:16.363 19:59:41 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:16.363 19:59:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:16.364 19:59:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:16.364 19:59:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:16.364 19:59:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:16.364 19:59:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:16.364 19:59:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:16.364 19:59:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:16.364 19:59:41 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:16.621 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:16.621 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:16.879 Using 'verbs' RDMA provider 00:01:30.032 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:42.244 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:42.502 Creating mk/config.mk...done. 00:01:42.502 Creating mk/cc.flags.mk...done. 00:01:42.502 Type 'make' to build. 
00:01:42.502 20:00:07 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:42.502 20:00:07 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:42.502 20:00:07 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:42.502 20:00:07 -- common/autotest_common.sh@10 -- $ set +x 00:01:42.502 ************************************ 00:01:42.502 START TEST make 00:01:42.502 ************************************ 00:01:42.502 20:00:07 make -- common/autotest_common.sh@1123 -- $ make -j112 00:01:42.762 make[1]: Nothing to be done for 'all'. 00:01:44.141 The Meson build system 00:01:44.141 Version: 1.3.1 00:01:44.141 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:44.141 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:44.141 Build type: native build 00:01:44.141 Project name: libvfio-user 00:01:44.141 Project version: 0.0.1 00:01:44.141 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:44.141 C linker for the host machine: cc ld.bfd 2.39-16 00:01:44.141 Host machine cpu family: x86_64 00:01:44.141 Host machine cpu: x86_64 00:01:44.141 Run-time dependency threads found: YES 00:01:44.141 Library dl found: YES 00:01:44.141 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:44.141 Run-time dependency json-c found: YES 0.17 00:01:44.141 Run-time dependency cmocka found: YES 1.1.7 00:01:44.141 Program pytest-3 found: NO 00:01:44.141 Program flake8 found: NO 00:01:44.141 Program misspell-fixer found: NO 00:01:44.141 Program restructuredtext-lint found: NO 00:01:44.141 Program valgrind found: YES (/usr/bin/valgrind) 00:01:44.141 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:44.141 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:44.141 Compiler for C supports arguments -Wwrite-strings: YES 00:01:44.141 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:44.141 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:44.141 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:44.141 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:44.141 Build targets in project: 8 00:01:44.141 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:44.141 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:44.141 00:01:44.141 libvfio-user 0.0.1 00:01:44.141 00:01:44.141 User defined options 00:01:44.141 buildtype : debug 00:01:44.141 default_library: shared 00:01:44.141 libdir : /usr/local/lib 00:01:44.141 00:01:44.141 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:44.720 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:44.720 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:44.980 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:44.980 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:44.980 [4/37] Compiling C object samples/null.p/null.c.o 00:01:44.980 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:44.980 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:44.980 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:44.980 [8/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:44.980 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:44.980 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:44.980 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:44.980 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:44.980 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:44.980 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:44.980 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:44.980 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:44.980 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:44.980 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:44.980 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:44.980 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:44.980 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:44.980 [22/37] Compiling C object samples/server.p/server.c.o 00:01:44.980 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:44.980 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:44.980 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:44.980 [26/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:44.980 [27/37] Compiling C object samples/client.p/client.c.o 00:01:44.981 [28/37] Linking target test/unit_tests 00:01:44.981 [29/37] Linking target samples/client 00:01:44.981 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:45.239 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:45.239 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:45.239 [33/37] Linking target samples/shadow_ioeventfd_server 00:01:45.239 [34/37] Linking target samples/server 00:01:45.239 [35/37] Linking target samples/gpio-pci-idio-16 00:01:45.239 [36/37] Linking target samples/null 00:01:45.239 [37/37] Linking target samples/lspci 00:01:45.239 INFO: autodetecting backend as ninja 00:01:45.239 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:45.498 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:45.756 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:45.756 ninja: no work to do. 00:01:52.325 The Meson build system 00:01:52.325 Version: 1.3.1 00:01:52.325 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:52.325 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:52.325 Build type: native build 00:01:52.325 Program cat found: YES (/usr/bin/cat) 00:01:52.325 Project name: DPDK 00:01:52.325 Project version: 24.03.0 00:01:52.325 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:52.325 C linker for the host machine: cc ld.bfd 2.39-16 00:01:52.325 Host machine cpu family: x86_64 00:01:52.325 Host machine cpu: x86_64 00:01:52.325 Message: ## Building in Developer Mode ## 00:01:52.325 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:52.325 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:52.325 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:52.325 Program python3 found: YES (/usr/bin/python3) 00:01:52.325 Program cat found: YES (/usr/bin/cat) 00:01:52.325 Compiler for C supports arguments -march=native: YES 00:01:52.325 Checking for size of "void *" : 8 00:01:52.325 Checking for size of "void *" : 8 (cached) 00:01:52.325 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:52.325 Library m found: YES 00:01:52.325 Library numa found: YES 00:01:52.325 Has header "numaif.h" : YES 00:01:52.325 Library fdt found: NO 00:01:52.325 Library execinfo found: NO 00:01:52.325 Has header "execinfo.h" : YES 00:01:52.325 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:52.325 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:52.325 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:52.325 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:52.325 Run-time dependency openssl found: YES 3.0.9 00:01:52.325 Run-time dependency libpcap found: YES 1.10.4 00:01:52.325 Has header "pcap.h" with dependency libpcap: YES 00:01:52.325 Compiler for C supports arguments -Wcast-qual: YES 00:01:52.325 Compiler for C supports arguments -Wdeprecated: YES 00:01:52.325 Compiler for C supports arguments -Wformat: YES 00:01:52.325 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:52.325 Compiler for C supports arguments -Wformat-security: NO 00:01:52.325 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:52.325 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:52.325 Compiler for C supports arguments -Wnested-externs: YES 00:01:52.325 Compiler for C supports arguments -Wold-style-definition: YES 00:01:52.325 Compiler for C supports arguments -Wpointer-arith: YES 00:01:52.325 Compiler for C supports arguments -Wsign-compare: YES 00:01:52.325 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:52.325 Compiler for C supports arguments -Wundef: YES 00:01:52.325 Compiler for C supports arguments -Wwrite-strings: YES 00:01:52.325 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:52.325 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:52.325 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:52.325 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:52.325 Program objdump found: YES (/usr/bin/objdump) 00:01:52.325 Compiler for C supports arguments -mavx512f: YES 00:01:52.325 Checking if "AVX512 checking" compiles: YES 00:01:52.325 Fetching value of define "__SSE4_2__" : 1 00:01:52.325 Fetching value of define "__AES__" : 1 00:01:52.325 Fetching value of define "__AVX__" : 1 00:01:52.325 Fetching value of define "__AVX2__" : 1 00:01:52.325 Fetching value of define "__AVX512BW__" : 1 00:01:52.325 Fetching value of define "__AVX512CD__" : 1 00:01:52.325 Fetching value of define "__AVX512DQ__" : 1 00:01:52.325 Fetching value of define "__AVX512F__" : 1 00:01:52.325 Fetching value of define "__AVX512VL__" : 1 00:01:52.325 Fetching value of define "__PCLMUL__" : 1 00:01:52.325 Fetching value of define "__RDRND__" : 1 00:01:52.325 Fetching value of define "__RDSEED__" : 1 00:01:52.325 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:52.325 Fetching value of define "__znver1__" : (undefined) 00:01:52.325 Fetching value of define "__znver2__" : (undefined) 00:01:52.325 Fetching value of define "__znver3__" : (undefined) 00:01:52.325 Fetching value of define "__znver4__" : (undefined) 00:01:52.325 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:52.325 Message: lib/log: Defining dependency "log" 00:01:52.325 Message: lib/kvargs: Defining dependency "kvargs" 00:01:52.325 Message: lib/telemetry: Defining dependency "telemetry" 00:01:52.325 Checking for function "getentropy" : NO 00:01:52.325 Message: lib/eal: Defining dependency "eal" 00:01:52.325 Message: lib/ring: Defining dependency "ring" 00:01:52.325 Message: lib/rcu: Defining dependency "rcu" 00:01:52.325 Message: lib/mempool: Defining dependency "mempool" 00:01:52.325 Message: lib/mbuf: Defining dependency "mbuf" 00:01:52.325 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:52.325 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:52.325 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:52.326 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:52.326 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:52.326 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:52.326 Compiler for C supports arguments -mpclmul: YES 00:01:52.326 Compiler for C supports arguments -maes: YES 00:01:52.326 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:52.326 Compiler for C supports arguments -mavx512bw: YES 00:01:52.326 Compiler for C supports arguments -mavx512dq: YES 00:01:52.326 Compiler for C supports arguments -mavx512vl: YES 00:01:52.326 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:52.326 Compiler for C supports arguments -mavx2: YES 00:01:52.326 Compiler for C supports arguments -mavx: YES 00:01:52.326 Message: lib/net: Defining dependency "net" 00:01:52.326 Message: lib/meter: Defining dependency "meter" 00:01:52.326 Message: lib/ethdev: Defining dependency "ethdev" 00:01:52.326 Message: lib/pci: Defining dependency "pci" 00:01:52.326 Message: lib/cmdline: Defining dependency "cmdline" 00:01:52.326 Message: lib/hash: Defining dependency "hash" 00:01:52.326 Message: lib/timer: Defining dependency "timer" 00:01:52.326 Message: lib/compressdev: Defining dependency "compressdev" 00:01:52.326 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:52.326 Message: lib/dmadev: Defining dependency "dmadev" 00:01:52.326 
Compiler for C supports arguments -Wno-cast-qual: YES 00:01:52.326 Message: lib/power: Defining dependency "power" 00:01:52.326 Message: lib/reorder: Defining dependency "reorder" 00:01:52.326 Message: lib/security: Defining dependency "security" 00:01:52.326 Has header "linux/userfaultfd.h" : YES 00:01:52.326 Has header "linux/vduse.h" : YES 00:01:52.326 Message: lib/vhost: Defining dependency "vhost" 00:01:52.326 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:52.326 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:52.326 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:52.326 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:52.326 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:52.326 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:52.326 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:52.326 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:52.326 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:52.326 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:52.326 Program doxygen found: YES (/usr/bin/doxygen) 00:01:52.326 Configuring doxy-api-html.conf using configuration 00:01:52.326 Configuring doxy-api-man.conf using configuration 00:01:52.326 Program mandb found: YES (/usr/bin/mandb) 00:01:52.326 Program sphinx-build found: NO 00:01:52.326 Configuring rte_build_config.h using configuration 00:01:52.326 Message: 00:01:52.326 ================= 00:01:52.326 Applications Enabled 00:01:52.326 ================= 00:01:52.326 00:01:52.326 apps: 00:01:52.326 00:01:52.326 00:01:52.326 Message: 00:01:52.326 ================= 00:01:52.326 Libraries Enabled 00:01:52.326 ================= 00:01:52.326 00:01:52.326 libs: 00:01:52.326 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:52.326 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:52.326 cryptodev, dmadev, power, reorder, security, vhost, 00:01:52.326 00:01:52.326 Message: 00:01:52.326 =============== 00:01:52.326 Drivers Enabled 00:01:52.326 =============== 00:01:52.326 00:01:52.326 common: 00:01:52.326 00:01:52.326 bus: 00:01:52.326 pci, vdev, 00:01:52.326 mempool: 00:01:52.326 ring, 00:01:52.326 dma: 00:01:52.326 00:01:52.326 net: 00:01:52.326 00:01:52.326 crypto: 00:01:52.326 00:01:52.326 compress: 00:01:52.326 00:01:52.326 vdpa: 00:01:52.326 00:01:52.326 00:01:52.326 Message: 00:01:52.326 ================= 00:01:52.326 Content Skipped 00:01:52.326 ================= 00:01:52.326 00:01:52.326 apps: 00:01:52.326 dumpcap: explicitly disabled via build config 00:01:52.326 graph: explicitly disabled via build config 00:01:52.326 pdump: explicitly disabled via build config 00:01:52.326 proc-info: explicitly disabled via build config 00:01:52.326 test-acl: explicitly disabled via build config 00:01:52.326 test-bbdev: explicitly disabled via build config 00:01:52.326 test-cmdline: explicitly disabled via build config 00:01:52.326 test-compress-perf: explicitly disabled via build config 00:01:52.326 test-crypto-perf: explicitly disabled via build config 00:01:52.326 test-dma-perf: explicitly disabled via build config 00:01:52.326 test-eventdev: explicitly disabled via build config 00:01:52.326 test-fib: explicitly disabled via build config 00:01:52.326 test-flow-perf: explicitly disabled via build config 00:01:52.326 test-gpudev: explicitly disabled via build config 
00:01:52.326 test-mldev: explicitly disabled via build config 00:01:52.326 test-pipeline: explicitly disabled via build config 00:01:52.326 test-pmd: explicitly disabled via build config 00:01:52.326 test-regex: explicitly disabled via build config 00:01:52.326 test-sad: explicitly disabled via build config 00:01:52.326 test-security-perf: explicitly disabled via build config 00:01:52.326 00:01:52.326 libs: 00:01:52.326 argparse: explicitly disabled via build config 00:01:52.326 metrics: explicitly disabled via build config 00:01:52.326 acl: explicitly disabled via build config 00:01:52.326 bbdev: explicitly disabled via build config 00:01:52.326 bitratestats: explicitly disabled via build config 00:01:52.326 bpf: explicitly disabled via build config 00:01:52.326 cfgfile: explicitly disabled via build config 00:01:52.326 distributor: explicitly disabled via build config 00:01:52.326 efd: explicitly disabled via build config 00:01:52.326 eventdev: explicitly disabled via build config 00:01:52.326 dispatcher: explicitly disabled via build config 00:01:52.326 gpudev: explicitly disabled via build config 00:01:52.326 gro: explicitly disabled via build config 00:01:52.326 gso: explicitly disabled via build config 00:01:52.326 ip_frag: explicitly disabled via build config 00:01:52.326 jobstats: explicitly disabled via build config 00:01:52.326 latencystats: explicitly disabled via build config 00:01:52.326 lpm: explicitly disabled via build config 00:01:52.326 member: explicitly disabled via build config 00:01:52.326 pcapng: explicitly disabled via build config 00:01:52.326 rawdev: explicitly disabled via build config 00:01:52.326 regexdev: explicitly disabled via build config 00:01:52.326 mldev: explicitly disabled via build config 00:01:52.326 rib: explicitly disabled via build config 00:01:52.326 sched: explicitly disabled via build config 00:01:52.326 stack: explicitly disabled via build config 00:01:52.326 ipsec: explicitly disabled via build config 00:01:52.326 pdcp: explicitly disabled via build config 00:01:52.326 fib: explicitly disabled via build config 00:01:52.326 port: explicitly disabled via build config 00:01:52.326 pdump: explicitly disabled via build config 00:01:52.326 table: explicitly disabled via build config 00:01:52.326 pipeline: explicitly disabled via build config 00:01:52.326 graph: explicitly disabled via build config 00:01:52.326 node: explicitly disabled via build config 00:01:52.326 00:01:52.326 drivers: 00:01:52.326 common/cpt: not in enabled drivers build config 00:01:52.326 common/dpaax: not in enabled drivers build config 00:01:52.326 common/iavf: not in enabled drivers build config 00:01:52.326 common/idpf: not in enabled drivers build config 00:01:52.326 common/ionic: not in enabled drivers build config 00:01:52.326 common/mvep: not in enabled drivers build config 00:01:52.326 common/octeontx: not in enabled drivers build config 00:01:52.326 bus/auxiliary: not in enabled drivers build config 00:01:52.326 bus/cdx: not in enabled drivers build config 00:01:52.326 bus/dpaa: not in enabled drivers build config 00:01:52.326 bus/fslmc: not in enabled drivers build config 00:01:52.326 bus/ifpga: not in enabled drivers build config 00:01:52.326 bus/platform: not in enabled drivers build config 00:01:52.326 bus/uacce: not in enabled drivers build config 00:01:52.326 bus/vmbus: not in enabled drivers build config 00:01:52.326 common/cnxk: not in enabled drivers build config 00:01:52.326 common/mlx5: not in enabled drivers build config 00:01:52.326 common/nfp: not in 
enabled drivers build config 00:01:52.326 common/nitrox: not in enabled drivers build config 00:01:52.326 common/qat: not in enabled drivers build config 00:01:52.326 common/sfc_efx: not in enabled drivers build config 00:01:52.326 mempool/bucket: not in enabled drivers build config 00:01:52.326 mempool/cnxk: not in enabled drivers build config 00:01:52.326 mempool/dpaa: not in enabled drivers build config 00:01:52.326 mempool/dpaa2: not in enabled drivers build config 00:01:52.326 mempool/octeontx: not in enabled drivers build config 00:01:52.326 mempool/stack: not in enabled drivers build config 00:01:52.326 dma/cnxk: not in enabled drivers build config 00:01:52.326 dma/dpaa: not in enabled drivers build config 00:01:52.326 dma/dpaa2: not in enabled drivers build config 00:01:52.326 dma/hisilicon: not in enabled drivers build config 00:01:52.326 dma/idxd: not in enabled drivers build config 00:01:52.326 dma/ioat: not in enabled drivers build config 00:01:52.326 dma/skeleton: not in enabled drivers build config 00:01:52.326 net/af_packet: not in enabled drivers build config 00:01:52.326 net/af_xdp: not in enabled drivers build config 00:01:52.326 net/ark: not in enabled drivers build config 00:01:52.326 net/atlantic: not in enabled drivers build config 00:01:52.326 net/avp: not in enabled drivers build config 00:01:52.326 net/axgbe: not in enabled drivers build config 00:01:52.326 net/bnx2x: not in enabled drivers build config 00:01:52.326 net/bnxt: not in enabled drivers build config 00:01:52.326 net/bonding: not in enabled drivers build config 00:01:52.326 net/cnxk: not in enabled drivers build config 00:01:52.326 net/cpfl: not in enabled drivers build config 00:01:52.326 net/cxgbe: not in enabled drivers build config 00:01:52.326 net/dpaa: not in enabled drivers build config 00:01:52.326 net/dpaa2: not in enabled drivers build config 00:01:52.326 net/e1000: not in enabled drivers build config 00:01:52.326 net/ena: not in enabled drivers build config 00:01:52.326 net/enetc: not in enabled drivers build config 00:01:52.326 net/enetfec: not in enabled drivers build config 00:01:52.326 net/enic: not in enabled drivers build config 00:01:52.326 net/failsafe: not in enabled drivers build config 00:01:52.326 net/fm10k: not in enabled drivers build config 00:01:52.326 net/gve: not in enabled drivers build config 00:01:52.326 net/hinic: not in enabled drivers build config 00:01:52.326 net/hns3: not in enabled drivers build config 00:01:52.326 net/i40e: not in enabled drivers build config 00:01:52.326 net/iavf: not in enabled drivers build config 00:01:52.326 net/ice: not in enabled drivers build config 00:01:52.326 net/idpf: not in enabled drivers build config 00:01:52.326 net/igc: not in enabled drivers build config 00:01:52.326 net/ionic: not in enabled drivers build config 00:01:52.327 net/ipn3ke: not in enabled drivers build config 00:01:52.327 net/ixgbe: not in enabled drivers build config 00:01:52.327 net/mana: not in enabled drivers build config 00:01:52.327 net/memif: not in enabled drivers build config 00:01:52.327 net/mlx4: not in enabled drivers build config 00:01:52.327 net/mlx5: not in enabled drivers build config 00:01:52.327 net/mvneta: not in enabled drivers build config 00:01:52.327 net/mvpp2: not in enabled drivers build config 00:01:52.327 net/netvsc: not in enabled drivers build config 00:01:52.327 net/nfb: not in enabled drivers build config 00:01:52.327 net/nfp: not in enabled drivers build config 00:01:52.327 net/ngbe: not in enabled drivers build config 00:01:52.327 
net/null: not in enabled drivers build config 00:01:52.327 net/octeontx: not in enabled drivers build config 00:01:52.327 net/octeon_ep: not in enabled drivers build config 00:01:52.327 net/pcap: not in enabled drivers build config 00:01:52.327 net/pfe: not in enabled drivers build config 00:01:52.327 net/qede: not in enabled drivers build config 00:01:52.327 net/ring: not in enabled drivers build config 00:01:52.327 net/sfc: not in enabled drivers build config 00:01:52.327 net/softnic: not in enabled drivers build config 00:01:52.327 net/tap: not in enabled drivers build config 00:01:52.327 net/thunderx: not in enabled drivers build config 00:01:52.327 net/txgbe: not in enabled drivers build config 00:01:52.327 net/vdev_netvsc: not in enabled drivers build config 00:01:52.327 net/vhost: not in enabled drivers build config 00:01:52.327 net/virtio: not in enabled drivers build config 00:01:52.327 net/vmxnet3: not in enabled drivers build config 00:01:52.327 raw/*: missing internal dependency, "rawdev" 00:01:52.327 crypto/armv8: not in enabled drivers build config 00:01:52.327 crypto/bcmfs: not in enabled drivers build config 00:01:52.327 crypto/caam_jr: not in enabled drivers build config 00:01:52.327 crypto/ccp: not in enabled drivers build config 00:01:52.327 crypto/cnxk: not in enabled drivers build config 00:01:52.327 crypto/dpaa_sec: not in enabled drivers build config 00:01:52.327 crypto/dpaa2_sec: not in enabled drivers build config 00:01:52.327 crypto/ipsec_mb: not in enabled drivers build config 00:01:52.327 crypto/mlx5: not in enabled drivers build config 00:01:52.327 crypto/mvsam: not in enabled drivers build config 00:01:52.327 crypto/nitrox: not in enabled drivers build config 00:01:52.327 crypto/null: not in enabled drivers build config 00:01:52.327 crypto/octeontx: not in enabled drivers build config 00:01:52.327 crypto/openssl: not in enabled drivers build config 00:01:52.327 crypto/scheduler: not in enabled drivers build config 00:01:52.327 crypto/uadk: not in enabled drivers build config 00:01:52.327 crypto/virtio: not in enabled drivers build config 00:01:52.327 compress/isal: not in enabled drivers build config 00:01:52.327 compress/mlx5: not in enabled drivers build config 00:01:52.327 compress/nitrox: not in enabled drivers build config 00:01:52.327 compress/octeontx: not in enabled drivers build config 00:01:52.327 compress/zlib: not in enabled drivers build config 00:01:52.327 regex/*: missing internal dependency, "regexdev" 00:01:52.327 ml/*: missing internal dependency, "mldev" 00:01:52.327 vdpa/ifc: not in enabled drivers build config 00:01:52.327 vdpa/mlx5: not in enabled drivers build config 00:01:52.327 vdpa/nfp: not in enabled drivers build config 00:01:52.327 vdpa/sfc: not in enabled drivers build config 00:01:52.327 event/*: missing internal dependency, "eventdev" 00:01:52.327 baseband/*: missing internal dependency, "bbdev" 00:01:52.327 gpu/*: missing internal dependency, "gpudev" 00:01:52.327 00:01:52.327 00:01:52.327 Build targets in project: 85 00:01:52.327 00:01:52.327 DPDK 24.03.0 00:01:52.327 00:01:52.327 User defined options 00:01:52.327 buildtype : debug 00:01:52.327 default_library : shared 00:01:52.327 libdir : lib 00:01:52.327 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:52.327 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:52.327 c_link_args : 00:01:52.327 cpu_instruction_set: native 00:01:52.327 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:52.327 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:52.327 enable_docs : false 00:01:52.327 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:52.327 enable_kmods : false 00:01:52.327 max_lcores : 128 00:01:52.327 tests : false 00:01:52.327 00:01:52.327 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:52.327 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:52.327 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:52.327 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:52.327 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:52.327 [4/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:52.327 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:52.327 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:52.327 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:52.591 [8/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:52.591 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:52.591 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:52.591 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:52.591 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:52.591 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:52.591 [14/268] Linking static target lib/librte_kvargs.a 00:01:52.591 [15/268] Linking static target lib/librte_log.a 00:01:52.591 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:52.591 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:52.591 [18/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:52.591 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:52.591 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:52.591 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:52.591 [22/268] Linking static target lib/librte_pci.a 00:01:52.591 [23/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:52.591 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:52.591 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:52.591 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:52.591 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:52.591 [28/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:52.591 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:52.591 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:52.591 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 
00:01:52.591 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:52.855 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:52.855 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:52.855 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:52.855 [36/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:52.855 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:52.855 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:52.855 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:52.855 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:52.855 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:52.855 [42/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:52.855 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:53.114 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:53.114 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:53.114 [46/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:53.114 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:53.114 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:53.114 [49/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:53.114 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:53.114 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:53.114 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:53.114 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:53.114 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:53.114 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:53.114 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:53.114 [57/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:53.114 [58/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:53.114 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:53.114 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:53.114 [61/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:53.114 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:53.114 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:53.114 [64/268] Linking static target lib/librte_meter.a 00:01:53.114 [65/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:53.114 [66/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.114 [67/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:53.114 [68/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:53.114 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:53.114 [70/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:53.114 [71/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 
00:01:53.114 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:53.114 [73/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:53.114 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:53.114 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:53.114 [76/268] Linking static target lib/librte_ring.a 00:01:53.115 [77/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:53.115 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:53.115 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:53.115 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:53.115 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:53.115 [82/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:53.115 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:53.115 [84/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.115 [85/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:53.115 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:53.115 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:53.115 [88/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:53.115 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:53.115 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:53.115 [91/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:53.115 [92/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:53.115 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:53.115 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:53.115 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:53.115 [96/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:53.115 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:53.115 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:53.115 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:53.115 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:53.115 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:53.115 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:53.115 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:53.115 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:53.115 [105/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:53.115 [106/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:53.115 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:53.115 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:53.115 [109/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:53.115 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:53.115 [111/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:53.115 [112/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:53.115 [113/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:53.115 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:53.115 [115/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:53.115 [116/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:53.373 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:53.373 [118/268] Linking static target lib/librte_cmdline.a 00:01:53.373 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:53.373 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:53.373 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:53.373 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:53.373 [123/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:53.373 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:53.373 [125/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:53.373 [126/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:53.373 [127/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:53.373 [128/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:53.373 [129/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:53.373 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:53.373 [131/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:53.373 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:53.373 [133/268] Linking static target lib/librte_timer.a 00:01:53.373 [134/268] Linking static target lib/librte_eal.a 00:01:53.373 [135/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:53.373 [136/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:53.373 [137/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.373 [138/268] Linking static target lib/librte_rcu.a 00:01:53.373 [139/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:53.373 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:53.373 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:53.373 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:53.373 [143/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:53.373 [144/268] Linking target lib/librte_log.so.24.1 00:01:53.373 [145/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:53.373 [146/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:53.373 [147/268] Linking static target lib/librte_mempool.a 00:01:53.373 [148/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:53.373 [149/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.373 [150/268] Linking static target lib/librte_compressdev.a 00:01:53.373 [151/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.373 
[152/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:53.373 [153/268] Linking static target lib/librte_dmadev.a 00:01:53.373 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:53.373 [155/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:53.373 [156/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:53.632 [157/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:53.632 [158/268] Linking static target lib/librte_telemetry.a 00:01:53.632 [159/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:53.632 [160/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:53.632 [161/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:53.632 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:53.632 [163/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:53.632 [164/268] Linking target lib/librte_kvargs.so.24.1 00:01:53.632 [165/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:53.632 [166/268] Linking static target lib/librte_net.a 00:01:53.632 [167/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:53.632 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:53.632 [169/268] Linking static target lib/librte_mbuf.a 00:01:53.632 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:53.632 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:53.632 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:53.632 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:53.632 [174/268] Linking static target lib/librte_power.a 00:01:53.632 [175/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:53.632 [176/268] Linking static target lib/librte_hash.a 00:01:53.632 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:53.632 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:53.632 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:53.632 [180/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:53.632 [181/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:53.632 [182/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.632 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:53.632 [184/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:53.632 [185/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.632 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:53.632 [187/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:53.899 [188/268] Linking static target lib/librte_reorder.a 00:01:53.899 [189/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:53.899 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:53.899 [191/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:53.899 [192/268] Linking static target drivers/librte_bus_vdev.a 00:01:53.899 [193/268] Compiling C object 
drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:53.899 [194/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:53.899 [195/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:53.899 [196/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:53.899 [197/268] Linking static target lib/librte_security.a 00:01:53.899 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:53.899 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:53.899 [200/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:53.899 [201/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.899 [202/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:53.899 [203/268] Linking static target lib/librte_cryptodev.a 00:01:53.899 [204/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:53.899 [205/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:53.899 [206/268] Linking static target drivers/librte_bus_pci.a 00:01:54.159 [207/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:54.159 [208/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.159 [209/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.159 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.159 [211/268] Linking static target drivers/librte_mempool_ring.a 00:01:54.159 [212/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.159 [213/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.159 [214/268] Linking target lib/librte_telemetry.so.24.1 00:01:54.159 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.159 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.159 [217/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.417 [218/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:54.417 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.417 [220/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.417 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:54.417 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.417 [223/268] Linking static target lib/librte_ethdev.a 00:01:54.417 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.676 [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.676 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.676 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:56.055 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.055 [229/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:56.055 [230/268] Linking static target lib/librte_vhost.a 00:01:57.959 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.227 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.489 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.489 [234/268] Linking target lib/librte_eal.so.24.1 00:02:03.761 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:03.761 [236/268] Linking target lib/librte_ring.so.24.1 00:02:03.761 [237/268] Linking target lib/librte_meter.so.24.1 00:02:03.761 [238/268] Linking target lib/librte_timer.so.24.1 00:02:03.761 [239/268] Linking target lib/librte_pci.so.24.1 00:02:03.761 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:03.761 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:03.761 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:03.761 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:03.761 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:03.761 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:04.061 [246/268] Linking target lib/librte_rcu.so.24.1 00:02:04.061 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:04.061 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:04.061 [249/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:04.061 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:04.061 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:04.061 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:04.061 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:04.320 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:04.320 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:04.320 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:04.320 [257/268] Linking target lib/librte_net.so.24.1 00:02:04.320 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:04.320 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:04.320 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:04.320 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:04.579 [262/268] Linking target lib/librte_hash.so.24.1 00:02:04.579 [263/268] Linking target lib/librte_security.so.24.1 00:02:04.579 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:04.579 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:04.579 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:04.579 [267/268] Linking target lib/librte_power.so.24.1 00:02:04.579 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:04.579 INFO: autodetecting backend as ninja 00:02:04.579 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:05.955 CC lib/log/log.o 00:02:05.955 CC lib/log/log_flags.o 00:02:05.955 CC lib/log/log_deprecated.o 00:02:05.955 CC lib/ut_mock/mock.o 00:02:05.955 CC lib/ut/ut.o 
00:02:05.955 LIB libspdk_ut_mock.a 00:02:05.955 LIB libspdk_log.a 00:02:05.955 LIB libspdk_ut.a 00:02:05.955 SO libspdk_ut_mock.so.6.0 00:02:05.955 SO libspdk_log.so.7.0 00:02:06.213 SO libspdk_ut.so.2.0 00:02:06.213 SYMLINK libspdk_ut_mock.so 00:02:06.213 SYMLINK libspdk_ut.so 00:02:06.213 SYMLINK libspdk_log.so 00:02:06.472 CXX lib/trace_parser/trace.o 00:02:06.472 CC lib/util/base64.o 00:02:06.472 CC lib/util/bit_array.o 00:02:06.472 CC lib/dma/dma.o 00:02:06.472 CC lib/ioat/ioat.o 00:02:06.472 CC lib/util/cpuset.o 00:02:06.472 CC lib/util/crc16.o 00:02:06.472 CC lib/util/crc32.o 00:02:06.472 CC lib/util/crc32c.o 00:02:06.472 CC lib/util/crc32_ieee.o 00:02:06.472 CC lib/util/crc64.o 00:02:06.472 CC lib/util/dif.o 00:02:06.472 CC lib/util/fd.o 00:02:06.472 CC lib/util/fd_group.o 00:02:06.472 CC lib/util/file.o 00:02:06.472 CC lib/util/hexlify.o 00:02:06.472 CC lib/util/iov.o 00:02:06.472 CC lib/util/math.o 00:02:06.472 CC lib/util/net.o 00:02:06.472 CC lib/util/pipe.o 00:02:06.472 CC lib/util/string.o 00:02:06.472 CC lib/util/strerror_tls.o 00:02:06.472 CC lib/util/zipf.o 00:02:06.472 CC lib/util/uuid.o 00:02:06.472 CC lib/util/xor.o 00:02:06.730 CC lib/vfio_user/host/vfio_user_pci.o 00:02:06.730 CC lib/vfio_user/host/vfio_user.o 00:02:06.730 LIB libspdk_dma.a 00:02:06.730 LIB libspdk_ioat.a 00:02:06.730 SO libspdk_dma.so.4.0 00:02:06.730 SO libspdk_ioat.so.7.0 00:02:06.730 SYMLINK libspdk_dma.so 00:02:06.730 SYMLINK libspdk_ioat.so 00:02:06.988 LIB libspdk_vfio_user.a 00:02:06.988 SO libspdk_vfio_user.so.5.0 00:02:06.988 SYMLINK libspdk_vfio_user.so 00:02:06.988 LIB libspdk_util.a 00:02:06.988 SO libspdk_util.so.9.1 00:02:07.246 SYMLINK libspdk_util.so 00:02:07.246 LIB libspdk_trace_parser.a 00:02:07.504 SO libspdk_trace_parser.so.5.0 00:02:07.504 CC lib/rdma_utils/rdma_utils.o 00:02:07.504 CC lib/vmd/vmd.o 00:02:07.504 CC lib/vmd/led.o 00:02:07.504 CC lib/idxd/idxd.o 00:02:07.504 CC lib/env_dpdk/env.o 00:02:07.504 CC lib/idxd/idxd_user.o 00:02:07.504 CC lib/env_dpdk/memory.o 00:02:07.504 CC lib/env_dpdk/init.o 00:02:07.504 CC lib/idxd/idxd_kernel.o 00:02:07.504 CC lib/env_dpdk/pci.o 00:02:07.504 CC lib/env_dpdk/threads.o 00:02:07.504 CC lib/env_dpdk/pci_ioat.o 00:02:07.504 CC lib/env_dpdk/pci_virtio.o 00:02:07.504 CC lib/env_dpdk/pci_vmd.o 00:02:07.504 CC lib/env_dpdk/pci_idxd.o 00:02:07.504 CC lib/env_dpdk/pci_event.o 00:02:07.504 CC lib/json/json_parse.o 00:02:07.504 CC lib/env_dpdk/sigbus_handler.o 00:02:07.504 CC lib/json/json_util.o 00:02:07.504 CC lib/env_dpdk/pci_dpdk.o 00:02:07.504 CC lib/json/json_write.o 00:02:07.504 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:07.504 CC lib/rdma_provider/common.o 00:02:07.504 CC lib/conf/conf.o 00:02:07.504 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:07.504 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:07.504 SYMLINK libspdk_trace_parser.so 00:02:07.766 LIB libspdk_rdma_provider.a 00:02:07.766 LIB libspdk_rdma_utils.a 00:02:07.766 LIB libspdk_conf.a 00:02:07.766 SO libspdk_rdma_provider.so.6.0 00:02:08.024 SO libspdk_rdma_utils.so.1.0 00:02:08.024 SO libspdk_conf.so.6.0 00:02:08.024 LIB libspdk_json.a 00:02:08.024 SYMLINK libspdk_rdma_provider.so 00:02:08.024 SYMLINK libspdk_conf.so 00:02:08.024 SYMLINK libspdk_rdma_utils.so 00:02:08.024 SO libspdk_json.so.6.0 00:02:08.024 SYMLINK libspdk_json.so 00:02:08.282 LIB libspdk_idxd.a 00:02:08.282 SO libspdk_idxd.so.12.0 00:02:08.282 LIB libspdk_vmd.a 00:02:08.282 SYMLINK libspdk_idxd.so 00:02:08.282 SO libspdk_vmd.so.6.0 00:02:08.282 SYMLINK libspdk_vmd.so 00:02:08.282 CC lib/jsonrpc/jsonrpc_server.o 
00:02:08.282 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:08.282 CC lib/jsonrpc/jsonrpc_client.o 00:02:08.282 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:08.539 LIB libspdk_env_dpdk.a 00:02:08.539 SO libspdk_env_dpdk.so.15.0 00:02:08.539 LIB libspdk_jsonrpc.a 00:02:08.797 SO libspdk_jsonrpc.so.6.0 00:02:08.797 SYMLINK libspdk_env_dpdk.so 00:02:08.797 SYMLINK libspdk_jsonrpc.so 00:02:09.055 CC lib/rpc/rpc.o 00:02:09.313 LIB libspdk_rpc.a 00:02:09.313 SO libspdk_rpc.so.6.0 00:02:09.313 SYMLINK libspdk_rpc.so 00:02:09.570 CC lib/trace/trace.o 00:02:09.570 CC lib/trace/trace_flags.o 00:02:09.570 CC lib/trace/trace_rpc.o 00:02:09.570 CC lib/notify/notify.o 00:02:09.571 CC lib/notify/notify_rpc.o 00:02:09.571 CC lib/keyring/keyring.o 00:02:09.571 CC lib/keyring/keyring_rpc.o 00:02:09.827 LIB libspdk_notify.a 00:02:09.827 SO libspdk_notify.so.6.0 00:02:09.827 LIB libspdk_trace.a 00:02:09.827 LIB libspdk_keyring.a 00:02:09.827 SYMLINK libspdk_notify.so 00:02:10.085 SO libspdk_trace.so.10.0 00:02:10.085 SO libspdk_keyring.so.1.0 00:02:10.085 SYMLINK libspdk_trace.so 00:02:10.085 SYMLINK libspdk_keyring.so 00:02:10.342 CC lib/sock/sock.o 00:02:10.342 CC lib/sock/sock_rpc.o 00:02:10.342 CC lib/thread/thread.o 00:02:10.342 CC lib/thread/iobuf.o 00:02:10.906 LIB libspdk_sock.a 00:02:10.906 SO libspdk_sock.so.10.0 00:02:10.906 SYMLINK libspdk_sock.so 00:02:11.165 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:11.165 CC lib/nvme/nvme_ctrlr.o 00:02:11.165 CC lib/nvme/nvme_fabric.o 00:02:11.165 CC lib/nvme/nvme_ns_cmd.o 00:02:11.165 CC lib/nvme/nvme_ns.o 00:02:11.165 CC lib/nvme/nvme_qpair.o 00:02:11.165 CC lib/nvme/nvme_pcie_common.o 00:02:11.165 CC lib/nvme/nvme_pcie.o 00:02:11.165 CC lib/nvme/nvme.o 00:02:11.165 CC lib/nvme/nvme_quirks.o 00:02:11.165 CC lib/nvme/nvme_transport.o 00:02:11.165 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:11.165 CC lib/nvme/nvme_discovery.o 00:02:11.165 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:11.165 CC lib/nvme/nvme_tcp.o 00:02:11.165 CC lib/nvme/nvme_opal.o 00:02:11.165 CC lib/nvme/nvme_io_msg.o 00:02:11.165 CC lib/nvme/nvme_zns.o 00:02:11.165 CC lib/nvme/nvme_poll_group.o 00:02:11.165 CC lib/nvme/nvme_stubs.o 00:02:11.165 CC lib/nvme/nvme_auth.o 00:02:11.165 CC lib/nvme/nvme_vfio_user.o 00:02:11.165 CC lib/nvme/nvme_cuse.o 00:02:11.165 CC lib/nvme/nvme_rdma.o 00:02:12.098 LIB libspdk_thread.a 00:02:12.098 SO libspdk_thread.so.10.1 00:02:12.098 SYMLINK libspdk_thread.so 00:02:12.355 CC lib/init/json_config.o 00:02:12.355 CC lib/init/subsystem.o 00:02:12.355 CC lib/init/subsystem_rpc.o 00:02:12.355 CC lib/init/rpc.o 00:02:12.355 CC lib/vfu_tgt/tgt_endpoint.o 00:02:12.355 CC lib/vfu_tgt/tgt_rpc.o 00:02:12.355 CC lib/blob/blobstore.o 00:02:12.355 CC lib/blob/request.o 00:02:12.355 CC lib/blob/zeroes.o 00:02:12.355 CC lib/blob/blob_bs_dev.o 00:02:12.355 CC lib/virtio/virtio.o 00:02:12.355 CC lib/virtio/virtio_vhost_user.o 00:02:12.355 CC lib/accel/accel.o 00:02:12.355 CC lib/virtio/virtio_vfio_user.o 00:02:12.355 CC lib/accel/accel_rpc.o 00:02:12.355 CC lib/virtio/virtio_pci.o 00:02:12.355 CC lib/accel/accel_sw.o 00:02:12.613 LIB libspdk_init.a 00:02:12.613 SO libspdk_init.so.5.0 00:02:12.613 LIB libspdk_virtio.a 00:02:12.613 SYMLINK libspdk_init.so 00:02:12.613 SO libspdk_virtio.so.7.0 00:02:12.872 SYMLINK libspdk_virtio.so 00:02:12.872 CC lib/event/app.o 00:02:12.872 CC lib/event/reactor.o 00:02:12.872 CC lib/event/log_rpc.o 00:02:12.872 CC lib/event/app_rpc.o 00:02:12.872 CC lib/event/scheduler_static.o 00:02:12.872 LIB libspdk_vfu_tgt.a 00:02:13.130 SO libspdk_vfu_tgt.so.3.0 00:02:13.130 
SYMLINK libspdk_vfu_tgt.so 00:02:13.387 LIB libspdk_accel.a 00:02:13.387 SO libspdk_accel.so.15.1 00:02:13.387 LIB libspdk_event.a 00:02:13.387 LIB libspdk_nvme.a 00:02:13.387 SO libspdk_event.so.14.0 00:02:13.387 SYMLINK libspdk_accel.so 00:02:13.387 SYMLINK libspdk_event.so 00:02:13.387 SO libspdk_nvme.so.13.1 00:02:13.644 CC lib/bdev/bdev_rpc.o 00:02:13.644 CC lib/bdev/bdev.o 00:02:13.644 CC lib/bdev/bdev_zone.o 00:02:13.644 CC lib/bdev/part.o 00:02:13.644 CC lib/bdev/scsi_nvme.o 00:02:13.902 SYMLINK libspdk_nvme.so 00:02:14.467 LIB libspdk_blob.a 00:02:14.467 SO libspdk_blob.so.11.0 00:02:14.467 SYMLINK libspdk_blob.so 00:02:14.725 CC lib/blobfs/blobfs.o 00:02:14.725 CC lib/blobfs/tree.o 00:02:14.725 CC lib/lvol/lvol.o 00:02:15.657 LIB libspdk_blobfs.a 00:02:15.657 SO libspdk_blobfs.so.10.0 00:02:15.657 LIB libspdk_lvol.a 00:02:15.657 SYMLINK libspdk_blobfs.so 00:02:15.657 SO libspdk_lvol.so.10.0 00:02:15.914 SYMLINK libspdk_lvol.so 00:02:16.479 LIB libspdk_bdev.a 00:02:16.479 SO libspdk_bdev.so.15.1 00:02:16.479 SYMLINK libspdk_bdev.so 00:02:16.738 CC lib/scsi/dev.o 00:02:16.738 CC lib/scsi/lun.o 00:02:16.738 CC lib/scsi/port.o 00:02:16.738 CC lib/nbd/nbd.o 00:02:16.738 CC lib/scsi/scsi.o 00:02:16.738 CC lib/scsi/scsi_pr.o 00:02:16.738 CC lib/nbd/nbd_rpc.o 00:02:16.738 CC lib/scsi/scsi_bdev.o 00:02:16.738 CC lib/scsi/scsi_rpc.o 00:02:16.738 CC lib/scsi/task.o 00:02:16.738 CC lib/ublk/ublk.o 00:02:16.738 CC lib/ublk/ublk_rpc.o 00:02:16.738 CC lib/nvmf/ctrlr.o 00:02:16.738 CC lib/nvmf/ctrlr_discovery.o 00:02:16.738 CC lib/nvmf/ctrlr_bdev.o 00:02:16.738 CC lib/nvmf/subsystem.o 00:02:16.738 CC lib/ftl/ftl_init.o 00:02:16.738 CC lib/nvmf/nvmf.o 00:02:16.738 CC lib/ftl/ftl_core.o 00:02:16.738 CC lib/nvmf/nvmf_rpc.o 00:02:16.738 CC lib/nvmf/tcp.o 00:02:16.738 CC lib/nvmf/transport.o 00:02:16.738 CC lib/ftl/ftl_layout.o 00:02:16.738 CC lib/ftl/ftl_debug.o 00:02:16.738 CC lib/nvmf/stubs.o 00:02:16.738 CC lib/nvmf/mdns_server.o 00:02:16.738 CC lib/ftl/ftl_io.o 00:02:16.738 CC lib/ftl/ftl_sb.o 00:02:16.738 CC lib/nvmf/vfio_user.o 00:02:16.738 CC lib/ftl/ftl_l2p.o 00:02:16.738 CC lib/nvmf/rdma.o 00:02:16.738 CC lib/ftl/ftl_l2p_flat.o 00:02:16.738 CC lib/nvmf/auth.o 00:02:16.738 CC lib/ftl/ftl_nv_cache.o 00:02:16.738 CC lib/ftl/ftl_band.o 00:02:16.738 CC lib/ftl/ftl_band_ops.o 00:02:16.738 CC lib/ftl/ftl_writer.o 00:02:16.738 CC lib/ftl/ftl_rq.o 00:02:16.738 CC lib/ftl/ftl_reloc.o 00:02:16.738 CC lib/ftl/ftl_l2p_cache.o 00:02:16.738 CC lib/ftl/ftl_p2l.o 00:02:16.738 CC lib/ftl/mngt/ftl_mngt.o 00:02:16.738 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:16.738 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:16.738 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:16.738 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:16.738 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:16.738 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:16.738 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:16.738 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:16.738 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:16.738 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:16.738 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:16.738 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:16.738 CC lib/ftl/utils/ftl_conf.o 00:02:16.738 CC lib/ftl/utils/ftl_md.o 00:02:16.738 CC lib/ftl/utils/ftl_mempool.o 00:02:16.738 CC lib/ftl/utils/ftl_bitmap.o 00:02:16.738 CC lib/ftl/utils/ftl_property.o 00:02:16.738 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:16.738 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:16.738 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:16.738 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:16.738 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:02:16.996 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:16.996 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:16.996 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:16.996 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:16.996 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:16.996 CC lib/ftl/base/ftl_base_bdev.o 00:02:16.996 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:16.996 CC lib/ftl/ftl_trace.o 00:02:16.996 CC lib/ftl/base/ftl_base_dev.o 00:02:17.255 LIB libspdk_nbd.a 00:02:17.513 SO libspdk_nbd.so.7.0 00:02:17.513 LIB libspdk_scsi.a 00:02:17.513 SYMLINK libspdk_nbd.so 00:02:17.513 SO libspdk_scsi.so.9.0 00:02:17.513 SYMLINK libspdk_scsi.so 00:02:17.513 LIB libspdk_ublk.a 00:02:17.513 SO libspdk_ublk.so.3.0 00:02:17.771 SYMLINK libspdk_ublk.so 00:02:17.771 CC lib/iscsi/init_grp.o 00:02:17.771 CC lib/iscsi/iscsi.o 00:02:17.771 CC lib/iscsi/conn.o 00:02:17.771 CC lib/iscsi/md5.o 00:02:17.771 CC lib/iscsi/param.o 00:02:17.771 CC lib/vhost/vhost.o 00:02:17.771 CC lib/iscsi/portal_grp.o 00:02:17.771 CC lib/iscsi/iscsi_rpc.o 00:02:17.771 CC lib/vhost/vhost_rpc.o 00:02:17.771 CC lib/iscsi/tgt_node.o 00:02:17.771 CC lib/vhost/vhost_scsi.o 00:02:17.771 CC lib/iscsi/iscsi_subsystem.o 00:02:17.771 CC lib/vhost/vhost_blk.o 00:02:17.771 CC lib/vhost/rte_vhost_user.o 00:02:17.771 CC lib/iscsi/task.o 00:02:18.336 LIB libspdk_ftl.a 00:02:18.336 SO libspdk_ftl.so.9.0 00:02:18.905 SYMLINK libspdk_ftl.so 00:02:18.905 LIB libspdk_vhost.a 00:02:19.163 SO libspdk_vhost.so.8.0 00:02:19.163 SYMLINK libspdk_vhost.so 00:02:19.163 LIB libspdk_nvmf.a 00:02:19.163 LIB libspdk_iscsi.a 00:02:19.163 SO libspdk_nvmf.so.19.0 00:02:19.421 SO libspdk_iscsi.so.8.0 00:02:19.421 SYMLINK libspdk_iscsi.so 00:02:19.421 SYMLINK libspdk_nvmf.so 00:02:20.124 CC module/vfu_device/vfu_virtio.o 00:02:20.124 CC module/vfu_device/vfu_virtio_blk.o 00:02:20.124 CC module/vfu_device/vfu_virtio_scsi.o 00:02:20.124 CC module/vfu_device/vfu_virtio_rpc.o 00:02:20.124 CC module/env_dpdk/env_dpdk_rpc.o 00:02:20.124 CC module/scheduler/gscheduler/gscheduler.o 00:02:20.124 CC module/accel/iaa/accel_iaa.o 00:02:20.124 CC module/accel/iaa/accel_iaa_rpc.o 00:02:20.124 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:20.124 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:20.124 CC module/blob/bdev/blob_bdev.o 00:02:20.124 CC module/accel/ioat/accel_ioat.o 00:02:20.124 CC module/accel/dsa/accel_dsa.o 00:02:20.124 CC module/accel/ioat/accel_ioat_rpc.o 00:02:20.124 CC module/accel/dsa/accel_dsa_rpc.o 00:02:20.124 CC module/keyring/file/keyring.o 00:02:20.124 CC module/keyring/file/keyring_rpc.o 00:02:20.124 CC module/accel/error/accel_error.o 00:02:20.124 CC module/sock/posix/posix.o 00:02:20.124 CC module/accel/error/accel_error_rpc.o 00:02:20.124 CC module/keyring/linux/keyring.o 00:02:20.124 CC module/keyring/linux/keyring_rpc.o 00:02:20.124 LIB libspdk_env_dpdk_rpc.a 00:02:20.124 SO libspdk_env_dpdk_rpc.so.6.0 00:02:20.382 SYMLINK libspdk_env_dpdk_rpc.so 00:02:20.383 LIB libspdk_scheduler_gscheduler.a 00:02:20.383 LIB libspdk_keyring_file.a 00:02:20.383 LIB libspdk_scheduler_dpdk_governor.a 00:02:20.383 SO libspdk_scheduler_gscheduler.so.4.0 00:02:20.383 LIB libspdk_keyring_linux.a 00:02:20.383 SO libspdk_keyring_file.so.1.0 00:02:20.383 LIB libspdk_scheduler_dynamic.a 00:02:20.383 LIB libspdk_accel_error.a 00:02:20.383 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:20.383 LIB libspdk_accel_iaa.a 00:02:20.383 SO libspdk_keyring_linux.so.1.0 00:02:20.383 LIB libspdk_accel_ioat.a 00:02:20.383 SO libspdk_accel_error.so.2.0 00:02:20.383 SO 
libspdk_scheduler_dynamic.so.4.0 00:02:20.383 SYMLINK libspdk_scheduler_gscheduler.so 00:02:20.383 SO libspdk_accel_ioat.so.6.0 00:02:20.383 SO libspdk_accel_iaa.so.3.0 00:02:20.383 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:20.383 SYMLINK libspdk_keyring_file.so 00:02:20.383 LIB libspdk_blob_bdev.a 00:02:20.383 SYMLINK libspdk_keyring_linux.so 00:02:20.383 LIB libspdk_accel_dsa.a 00:02:20.383 SYMLINK libspdk_scheduler_dynamic.so 00:02:20.383 SYMLINK libspdk_accel_error.so 00:02:20.383 SO libspdk_accel_dsa.so.5.0 00:02:20.383 SO libspdk_blob_bdev.so.11.0 00:02:20.383 SYMLINK libspdk_accel_iaa.so 00:02:20.383 SYMLINK libspdk_accel_ioat.so 00:02:20.640 SYMLINK libspdk_blob_bdev.so 00:02:20.640 SYMLINK libspdk_accel_dsa.so 00:02:20.640 LIB libspdk_vfu_device.a 00:02:20.640 SO libspdk_vfu_device.so.3.0 00:02:20.898 SYMLINK libspdk_vfu_device.so 00:02:20.898 LIB libspdk_sock_posix.a 00:02:20.898 SO libspdk_sock_posix.so.6.0 00:02:20.898 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:20.898 CC module/bdev/delay/vbdev_delay.o 00:02:20.898 CC module/bdev/gpt/gpt.o 00:02:20.898 CC module/bdev/gpt/vbdev_gpt.o 00:02:20.898 CC module/bdev/raid/bdev_raid.o 00:02:20.898 CC module/bdev/lvol/vbdev_lvol.o 00:02:20.898 CC module/blobfs/bdev/blobfs_bdev.o 00:02:20.898 CC module/bdev/raid/bdev_raid_sb.o 00:02:20.898 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:20.898 CC module/bdev/raid/bdev_raid_rpc.o 00:02:20.898 CC module/bdev/nvme/bdev_nvme.o 00:02:20.898 CC module/bdev/raid/raid0.o 00:02:20.898 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:20.898 CC module/bdev/raid/concat.o 00:02:20.898 CC module/bdev/raid/raid1.o 00:02:20.898 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:20.898 CC module/bdev/nvme/nvme_rpc.o 00:02:20.898 CC module/bdev/nvme/bdev_mdns_client.o 00:02:20.898 CC module/bdev/nvme/vbdev_opal.o 00:02:20.898 CC module/bdev/split/vbdev_split.o 00:02:20.898 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:20.898 CC module/bdev/split/vbdev_split_rpc.o 00:02:20.898 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:20.898 CC module/bdev/passthru/vbdev_passthru.o 00:02:20.898 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:20.898 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:20.898 CC module/bdev/iscsi/bdev_iscsi.o 00:02:20.898 CC module/bdev/error/vbdev_error.o 00:02:20.898 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:20.898 CC module/bdev/null/bdev_null.o 00:02:20.898 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:20.898 CC module/bdev/error/vbdev_error_rpc.o 00:02:20.898 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:20.898 CC module/bdev/malloc/bdev_malloc.o 00:02:20.898 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:20.898 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:20.898 CC module/bdev/null/bdev_null_rpc.o 00:02:20.898 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:21.157 CC module/bdev/ftl/bdev_ftl.o 00:02:21.157 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:21.157 CC module/bdev/aio/bdev_aio.o 00:02:21.157 CC module/bdev/aio/bdev_aio_rpc.o 00:02:21.157 SYMLINK libspdk_sock_posix.so 00:02:21.414 LIB libspdk_blobfs_bdev.a 00:02:21.414 SO libspdk_blobfs_bdev.so.6.0 00:02:21.414 LIB libspdk_bdev_split.a 00:02:21.414 SO libspdk_bdev_split.so.6.0 00:02:21.414 LIB libspdk_bdev_gpt.a 00:02:21.414 SYMLINK libspdk_blobfs_bdev.so 00:02:21.414 LIB libspdk_bdev_error.a 00:02:21.414 LIB libspdk_bdev_null.a 00:02:21.414 LIB libspdk_bdev_passthru.a 00:02:21.414 LIB libspdk_bdev_ftl.a 00:02:21.414 SO libspdk_bdev_gpt.so.6.0 00:02:21.414 SO libspdk_bdev_error.so.6.0 00:02:21.414 LIB libspdk_bdev_iscsi.a 
00:02:21.414 SO libspdk_bdev_null.so.6.0 00:02:21.414 SO libspdk_bdev_passthru.so.6.0 00:02:21.414 SYMLINK libspdk_bdev_split.so 00:02:21.414 SO libspdk_bdev_ftl.so.6.0 00:02:21.414 LIB libspdk_bdev_zone_block.a 00:02:21.414 SO libspdk_bdev_iscsi.so.6.0 00:02:21.414 LIB libspdk_bdev_malloc.a 00:02:21.414 LIB libspdk_bdev_delay.a 00:02:21.414 SYMLINK libspdk_bdev_gpt.so 00:02:21.414 SYMLINK libspdk_bdev_null.so 00:02:21.414 SO libspdk_bdev_zone_block.so.6.0 00:02:21.414 SYMLINK libspdk_bdev_error.so 00:02:21.414 SYMLINK libspdk_bdev_passthru.so 00:02:21.671 SO libspdk_bdev_malloc.so.6.0 00:02:21.671 SYMLINK libspdk_bdev_ftl.so 00:02:21.671 SO libspdk_bdev_delay.so.6.0 00:02:21.671 SYMLINK libspdk_bdev_iscsi.so 00:02:21.671 LIB libspdk_bdev_lvol.a 00:02:21.671 SYMLINK libspdk_bdev_zone_block.so 00:02:21.671 SYMLINK libspdk_bdev_malloc.so 00:02:21.671 SO libspdk_bdev_lvol.so.6.0 00:02:21.671 SYMLINK libspdk_bdev_delay.so 00:02:21.671 LIB libspdk_bdev_virtio.a 00:02:21.671 SO libspdk_bdev_virtio.so.6.0 00:02:21.671 SYMLINK libspdk_bdev_lvol.so 00:02:21.671 SYMLINK libspdk_bdev_virtio.so 00:02:21.671 LIB libspdk_bdev_aio.a 00:02:21.671 SO libspdk_bdev_aio.so.6.0 00:02:21.929 SYMLINK libspdk_bdev_aio.so 00:02:22.188 LIB libspdk_bdev_raid.a 00:02:22.188 SO libspdk_bdev_raid.so.6.0 00:02:22.188 SYMLINK libspdk_bdev_raid.so 00:02:23.563 LIB libspdk_bdev_nvme.a 00:02:23.563 SO libspdk_bdev_nvme.so.7.0 00:02:23.563 SYMLINK libspdk_bdev_nvme.so 00:02:24.129 CC module/event/subsystems/sock/sock.o 00:02:24.129 CC module/event/subsystems/iobuf/iobuf.o 00:02:24.129 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:24.129 CC module/event/subsystems/keyring/keyring.o 00:02:24.129 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:24.129 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:24.129 CC module/event/subsystems/vmd/vmd.o 00:02:24.129 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:24.129 CC module/event/subsystems/scheduler/scheduler.o 00:02:24.129 LIB libspdk_event_sock.a 00:02:24.388 LIB libspdk_event_vhost_blk.a 00:02:24.388 SO libspdk_event_sock.so.5.0 00:02:24.388 LIB libspdk_event_keyring.a 00:02:24.388 LIB libspdk_event_iobuf.a 00:02:24.388 LIB libspdk_event_scheduler.a 00:02:24.388 SO libspdk_event_vhost_blk.so.3.0 00:02:24.388 LIB libspdk_event_vfu_tgt.a 00:02:24.388 LIB libspdk_event_vmd.a 00:02:24.388 SYMLINK libspdk_event_sock.so 00:02:24.388 SO libspdk_event_iobuf.so.3.0 00:02:24.388 SO libspdk_event_keyring.so.1.0 00:02:24.388 SO libspdk_event_scheduler.so.4.0 00:02:24.388 SO libspdk_event_vfu_tgt.so.3.0 00:02:24.388 SO libspdk_event_vmd.so.6.0 00:02:24.388 SYMLINK libspdk_event_vhost_blk.so 00:02:24.388 SYMLINK libspdk_event_keyring.so 00:02:24.388 SYMLINK libspdk_event_iobuf.so 00:02:24.388 SYMLINK libspdk_event_vfu_tgt.so 00:02:24.388 SYMLINK libspdk_event_scheduler.so 00:02:24.388 SYMLINK libspdk_event_vmd.so 00:02:24.646 CC module/event/subsystems/accel/accel.o 00:02:24.906 LIB libspdk_event_accel.a 00:02:24.906 SO libspdk_event_accel.so.6.0 00:02:24.906 SYMLINK libspdk_event_accel.so 00:02:25.164 CC module/event/subsystems/bdev/bdev.o 00:02:25.422 LIB libspdk_event_bdev.a 00:02:25.422 SO libspdk_event_bdev.so.6.0 00:02:25.422 SYMLINK libspdk_event_bdev.so 00:02:25.680 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:25.680 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:25.680 CC module/event/subsystems/ublk/ublk.o 00:02:25.680 CC module/event/subsystems/scsi/scsi.o 00:02:25.680 CC module/event/subsystems/nbd/nbd.o 00:02:25.943 LIB libspdk_event_scsi.a 00:02:25.943 LIB 
libspdk_event_ublk.a 00:02:25.943 LIB libspdk_event_nbd.a 00:02:25.943 SO libspdk_event_scsi.so.6.0 00:02:25.943 SO libspdk_event_ublk.so.3.0 00:02:25.943 SO libspdk_event_nbd.so.6.0 00:02:25.943 LIB libspdk_event_nvmf.a 00:02:25.943 SYMLINK libspdk_event_scsi.so 00:02:25.943 SYMLINK libspdk_event_ublk.so 00:02:25.943 SYMLINK libspdk_event_nbd.so 00:02:25.943 SO libspdk_event_nvmf.so.6.0 00:02:26.201 SYMLINK libspdk_event_nvmf.so 00:02:26.201 CC module/event/subsystems/iscsi/iscsi.o 00:02:26.201 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:26.460 LIB libspdk_event_vhost_scsi.a 00:02:26.460 LIB libspdk_event_iscsi.a 00:02:26.460 SO libspdk_event_vhost_scsi.so.3.0 00:02:26.460 SO libspdk_event_iscsi.so.6.0 00:02:26.719 SYMLINK libspdk_event_vhost_scsi.so 00:02:26.719 SYMLINK libspdk_event_iscsi.so 00:02:26.719 SO libspdk.so.6.0 00:02:26.719 SYMLINK libspdk.so 00:02:26.977 CXX app/trace/trace.o 00:02:26.977 CC test/rpc_client/rpc_client_test.o 00:02:27.243 TEST_HEADER include/spdk/accel.h 00:02:27.243 TEST_HEADER include/spdk/assert.h 00:02:27.243 TEST_HEADER include/spdk/accel_module.h 00:02:27.243 TEST_HEADER include/spdk/base64.h 00:02:27.243 TEST_HEADER include/spdk/bdev.h 00:02:27.243 TEST_HEADER include/spdk/barrier.h 00:02:27.243 TEST_HEADER include/spdk/bdev_module.h 00:02:27.243 TEST_HEADER include/spdk/bdev_zone.h 00:02:27.243 TEST_HEADER include/spdk/bit_array.h 00:02:27.243 TEST_HEADER include/spdk/bit_pool.h 00:02:27.243 TEST_HEADER include/spdk/blob_bdev.h 00:02:27.243 TEST_HEADER include/spdk/blob.h 00:02:27.243 TEST_HEADER include/spdk/blobfs.h 00:02:27.243 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:27.243 CC app/spdk_nvme_discover/discovery_aer.o 00:02:27.243 TEST_HEADER include/spdk/cpuset.h 00:02:27.243 TEST_HEADER include/spdk/config.h 00:02:27.243 TEST_HEADER include/spdk/conf.h 00:02:27.243 TEST_HEADER include/spdk/crc16.h 00:02:27.243 TEST_HEADER include/spdk/crc64.h 00:02:27.243 TEST_HEADER include/spdk/crc32.h 00:02:27.243 TEST_HEADER include/spdk/dif.h 00:02:27.243 CC app/spdk_top/spdk_top.o 00:02:27.243 CC app/spdk_nvme_identify/identify.o 00:02:27.243 TEST_HEADER include/spdk/endian.h 00:02:27.243 TEST_HEADER include/spdk/dma.h 00:02:27.243 TEST_HEADER include/spdk/env_dpdk.h 00:02:27.243 CC app/spdk_lspci/spdk_lspci.o 00:02:27.243 TEST_HEADER include/spdk/event.h 00:02:27.243 CC app/spdk_nvme_perf/perf.o 00:02:27.243 CC app/trace_record/trace_record.o 00:02:27.243 TEST_HEADER include/spdk/env.h 00:02:27.243 TEST_HEADER include/spdk/fd_group.h 00:02:27.243 TEST_HEADER include/spdk/fd.h 00:02:27.243 TEST_HEADER include/spdk/ftl.h 00:02:27.243 TEST_HEADER include/spdk/file.h 00:02:27.243 TEST_HEADER include/spdk/gpt_spec.h 00:02:27.243 TEST_HEADER include/spdk/histogram_data.h 00:02:27.243 TEST_HEADER include/spdk/hexlify.h 00:02:27.243 TEST_HEADER include/spdk/idxd.h 00:02:27.243 TEST_HEADER include/spdk/idxd_spec.h 00:02:27.243 TEST_HEADER include/spdk/init.h 00:02:27.243 TEST_HEADER include/spdk/ioat_spec.h 00:02:27.243 TEST_HEADER include/spdk/ioat.h 00:02:27.243 TEST_HEADER include/spdk/iscsi_spec.h 00:02:27.243 TEST_HEADER include/spdk/json.h 00:02:27.243 TEST_HEADER include/spdk/jsonrpc.h 00:02:27.243 TEST_HEADER include/spdk/keyring.h 00:02:27.243 TEST_HEADER include/spdk/keyring_module.h 00:02:27.243 TEST_HEADER include/spdk/likely.h 00:02:27.243 TEST_HEADER include/spdk/log.h 00:02:27.243 TEST_HEADER include/spdk/lvol.h 00:02:27.243 TEST_HEADER include/spdk/memory.h 00:02:27.243 TEST_HEADER include/spdk/mmio.h 00:02:27.243 TEST_HEADER 
include/spdk/nbd.h 00:02:27.243 TEST_HEADER include/spdk/net.h 00:02:27.243 TEST_HEADER include/spdk/notify.h 00:02:27.243 TEST_HEADER include/spdk/nvme.h 00:02:27.243 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:27.243 TEST_HEADER include/spdk/nvme_intel.h 00:02:27.243 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:27.243 TEST_HEADER include/spdk/nvme_spec.h 00:02:27.243 TEST_HEADER include/spdk/nvme_zns.h 00:02:27.243 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:27.243 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:27.243 TEST_HEADER include/spdk/nvmf.h 00:02:27.243 TEST_HEADER include/spdk/nvmf_spec.h 00:02:27.243 TEST_HEADER include/spdk/opal_spec.h 00:02:27.243 TEST_HEADER include/spdk/nvmf_transport.h 00:02:27.243 TEST_HEADER include/spdk/opal.h 00:02:27.243 TEST_HEADER include/spdk/pci_ids.h 00:02:27.243 TEST_HEADER include/spdk/pipe.h 00:02:27.243 TEST_HEADER include/spdk/queue.h 00:02:27.243 TEST_HEADER include/spdk/reduce.h 00:02:27.243 TEST_HEADER include/spdk/rpc.h 00:02:27.243 TEST_HEADER include/spdk/scheduler.h 00:02:27.243 TEST_HEADER include/spdk/scsi.h 00:02:27.243 TEST_HEADER include/spdk/scsi_spec.h 00:02:27.243 TEST_HEADER include/spdk/sock.h 00:02:27.244 TEST_HEADER include/spdk/stdinc.h 00:02:27.244 TEST_HEADER include/spdk/string.h 00:02:27.244 TEST_HEADER include/spdk/thread.h 00:02:27.244 TEST_HEADER include/spdk/trace_parser.h 00:02:27.244 TEST_HEADER include/spdk/trace.h 00:02:27.244 TEST_HEADER include/spdk/ublk.h 00:02:27.244 TEST_HEADER include/spdk/tree.h 00:02:27.244 TEST_HEADER include/spdk/util.h 00:02:27.244 TEST_HEADER include/spdk/uuid.h 00:02:27.244 TEST_HEADER include/spdk/version.h 00:02:27.244 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:27.244 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:27.244 TEST_HEADER include/spdk/vhost.h 00:02:27.244 CC app/iscsi_tgt/iscsi_tgt.o 00:02:27.244 TEST_HEADER include/spdk/vmd.h 00:02:27.244 TEST_HEADER include/spdk/xor.h 00:02:27.244 CXX test/cpp_headers/accel.o 00:02:27.244 TEST_HEADER include/spdk/zipf.h 00:02:27.244 CXX test/cpp_headers/accel_module.o 00:02:27.244 CXX test/cpp_headers/assert.o 00:02:27.244 CC app/nvmf_tgt/nvmf_main.o 00:02:27.244 CXX test/cpp_headers/barrier.o 00:02:27.244 CXX test/cpp_headers/base64.o 00:02:27.244 CXX test/cpp_headers/bdev_module.o 00:02:27.244 CXX test/cpp_headers/bdev.o 00:02:27.244 CXX test/cpp_headers/bit_array.o 00:02:27.244 CXX test/cpp_headers/bdev_zone.o 00:02:27.244 CXX test/cpp_headers/bit_pool.o 00:02:27.244 CXX test/cpp_headers/blobfs_bdev.o 00:02:27.244 CXX test/cpp_headers/blob_bdev.o 00:02:27.244 CXX test/cpp_headers/blobfs.o 00:02:27.244 CXX test/cpp_headers/blob.o 00:02:27.244 CXX test/cpp_headers/cpuset.o 00:02:27.244 CXX test/cpp_headers/config.o 00:02:27.244 CXX test/cpp_headers/conf.o 00:02:27.244 CXX test/cpp_headers/crc16.o 00:02:27.244 CXX test/cpp_headers/crc32.o 00:02:27.244 CXX test/cpp_headers/crc64.o 00:02:27.244 CXX test/cpp_headers/dif.o 00:02:27.244 CXX test/cpp_headers/dma.o 00:02:27.244 CXX test/cpp_headers/endian.o 00:02:27.244 CXX test/cpp_headers/env_dpdk.o 00:02:27.244 CXX test/cpp_headers/env.o 00:02:27.244 CXX test/cpp_headers/event.o 00:02:27.244 CXX test/cpp_headers/fd.o 00:02:27.244 CXX test/cpp_headers/fd_group.o 00:02:27.244 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:27.244 CXX test/cpp_headers/file.o 00:02:27.244 CXX test/cpp_headers/gpt_spec.o 00:02:27.244 CXX test/cpp_headers/ftl.o 00:02:27.244 CXX test/cpp_headers/hexlify.o 00:02:27.244 CXX test/cpp_headers/histogram_data.o 00:02:27.244 CXX test/cpp_headers/idxd.o 
00:02:27.244 CC app/spdk_tgt/spdk_tgt.o 00:02:27.244 CXX test/cpp_headers/idxd_spec.o 00:02:27.244 CC app/spdk_dd/spdk_dd.o 00:02:27.244 CXX test/cpp_headers/init.o 00:02:27.244 CXX test/cpp_headers/ioat.o 00:02:27.244 CXX test/cpp_headers/ioat_spec.o 00:02:27.244 CXX test/cpp_headers/iscsi_spec.o 00:02:27.244 CXX test/cpp_headers/json.o 00:02:27.244 CXX test/cpp_headers/jsonrpc.o 00:02:27.244 CXX test/cpp_headers/keyring.o 00:02:27.244 CXX test/cpp_headers/keyring_module.o 00:02:27.244 CXX test/cpp_headers/likely.o 00:02:27.244 CXX test/cpp_headers/log.o 00:02:27.244 CXX test/cpp_headers/lvol.o 00:02:27.244 CXX test/cpp_headers/memory.o 00:02:27.244 CXX test/cpp_headers/mmio.o 00:02:27.244 CXX test/cpp_headers/nbd.o 00:02:27.244 CXX test/cpp_headers/notify.o 00:02:27.244 CXX test/cpp_headers/nvme.o 00:02:27.244 CXX test/cpp_headers/net.o 00:02:27.244 CXX test/cpp_headers/nvme_intel.o 00:02:27.244 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:27.244 CXX test/cpp_headers/nvme_ocssd.o 00:02:27.244 CXX test/cpp_headers/nvme_spec.o 00:02:27.244 CXX test/cpp_headers/nvme_zns.o 00:02:27.244 CXX test/cpp_headers/nvmf_cmd.o 00:02:27.244 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:27.244 CXX test/cpp_headers/nvmf.o 00:02:27.244 CXX test/cpp_headers/nvmf_spec.o 00:02:27.244 CXX test/cpp_headers/nvmf_transport.o 00:02:27.244 CXX test/cpp_headers/opal.o 00:02:27.244 CXX test/cpp_headers/opal_spec.o 00:02:27.244 CXX test/cpp_headers/pci_ids.o 00:02:27.244 CXX test/cpp_headers/pipe.o 00:02:27.244 CXX test/cpp_headers/queue.o 00:02:27.244 CXX test/cpp_headers/reduce.o 00:02:27.244 CXX test/cpp_headers/scheduler.o 00:02:27.244 CXX test/cpp_headers/rpc.o 00:02:27.244 CXX test/cpp_headers/scsi.o 00:02:27.244 CXX test/cpp_headers/scsi_spec.o 00:02:27.244 CXX test/cpp_headers/sock.o 00:02:27.244 CXX test/cpp_headers/stdinc.o 00:02:27.244 CXX test/cpp_headers/string.o 00:02:27.244 CXX test/cpp_headers/thread.o 00:02:27.244 CXX test/cpp_headers/trace.o 00:02:27.244 CXX test/cpp_headers/trace_parser.o 00:02:27.244 CXX test/cpp_headers/tree.o 00:02:27.244 CXX test/cpp_headers/ublk.o 00:02:27.244 CXX test/cpp_headers/util.o 00:02:27.244 CXX test/cpp_headers/version.o 00:02:27.244 CXX test/cpp_headers/uuid.o 00:02:27.521 CC test/thread/poller_perf/poller_perf.o 00:02:27.521 CC examples/util/zipf/zipf.o 00:02:27.521 CXX test/cpp_headers/vfio_user_pci.o 00:02:27.521 CC test/app/histogram_perf/histogram_perf.o 00:02:27.521 CC test/app/stub/stub.o 00:02:27.521 CC test/env/memory/memory_ut.o 00:02:27.521 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:27.521 CC examples/ioat/verify/verify.o 00:02:27.521 CC examples/ioat/perf/perf.o 00:02:27.521 CC test/env/vtophys/vtophys.o 00:02:27.521 CC test/app/jsoncat/jsoncat.o 00:02:27.521 CC test/env/pci/pci_ut.o 00:02:27.521 CXX test/cpp_headers/vfio_user_spec.o 00:02:27.521 CC app/fio/nvme/fio_plugin.o 00:02:27.521 CC test/app/bdev_svc/bdev_svc.o 00:02:27.521 CC app/fio/bdev/fio_plugin.o 00:02:27.521 CC test/dma/test_dma/test_dma.o 00:02:27.786 LINK spdk_lspci 00:02:27.786 LINK rpc_client_test 00:02:28.050 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:28.050 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:28.051 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:28.051 LINK spdk_trace_record 00:02:28.051 CXX test/cpp_headers/vhost.o 00:02:28.051 LINK interrupt_tgt 00:02:28.051 CXX test/cpp_headers/vmd.o 00:02:28.051 CXX test/cpp_headers/xor.o 00:02:28.051 CXX test/cpp_headers/zipf.o 00:02:28.051 CC test/env/mem_callbacks/mem_callbacks.o 00:02:28.051 LINK nvmf_tgt 
00:02:28.051 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:28.051 LINK vtophys 00:02:28.051 LINK zipf 00:02:28.051 LINK env_dpdk_post_init 00:02:28.051 LINK spdk_nvme_discover 00:02:28.051 LINK iscsi_tgt 00:02:28.051 LINK spdk_trace 00:02:28.051 LINK poller_perf 00:02:28.051 LINK ioat_perf 00:02:28.310 LINK bdev_svc 00:02:28.310 LINK histogram_perf 00:02:28.310 LINK jsoncat 00:02:28.310 LINK stub 00:02:28.310 LINK spdk_tgt 00:02:28.310 LINK verify 00:02:28.310 LINK test_dma 00:02:28.310 LINK pci_ut 00:02:28.567 LINK spdk_dd 00:02:28.567 LINK spdk_bdev 00:02:28.567 LINK nvme_fuzz 00:02:28.567 LINK vhost_fuzz 00:02:28.567 CC examples/idxd/perf/perf.o 00:02:28.567 CC examples/vmd/lsvmd/lsvmd.o 00:02:28.567 CC app/vhost/vhost.o 00:02:28.567 CC examples/sock/hello_world/hello_sock.o 00:02:28.567 CC examples/vmd/led/led.o 00:02:28.567 CC test/event/event_perf/event_perf.o 00:02:28.567 CC test/event/reactor_perf/reactor_perf.o 00:02:28.567 CC test/event/reactor/reactor.o 00:02:28.567 CC examples/thread/thread/thread_ex.o 00:02:28.567 CC test/event/app_repeat/app_repeat.o 00:02:28.567 LINK spdk_nvme_identify 00:02:28.567 LINK spdk_nvme 00:02:28.567 LINK spdk_nvme_perf 00:02:28.825 CC test/event/scheduler/scheduler.o 00:02:28.825 LINK lsvmd 00:02:28.825 LINK led 00:02:28.825 LINK mem_callbacks 00:02:28.825 LINK event_perf 00:02:28.825 LINK reactor 00:02:28.825 LINK reactor_perf 00:02:28.825 LINK vhost 00:02:28.825 LINK app_repeat 00:02:28.825 LINK hello_sock 00:02:28.825 CC test/nvme/err_injection/err_injection.o 00:02:28.825 CC test/nvme/sgl/sgl.o 00:02:28.825 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:28.825 CC test/nvme/simple_copy/simple_copy.o 00:02:28.825 CC test/nvme/overhead/overhead.o 00:02:28.825 CC test/nvme/e2edp/nvme_dp.o 00:02:28.825 CC test/nvme/fused_ordering/fused_ordering.o 00:02:28.825 CC test/nvme/aer/aer.o 00:02:28.825 CC test/nvme/boot_partition/boot_partition.o 00:02:28.825 CC test/nvme/cuse/cuse.o 00:02:28.825 CC test/nvme/reserve/reserve.o 00:02:28.826 CC test/nvme/startup/startup.o 00:02:28.826 CC test/nvme/connect_stress/connect_stress.o 00:02:28.826 LINK idxd_perf 00:02:29.084 LINK spdk_top 00:02:29.084 LINK thread 00:02:29.084 CC test/nvme/compliance/nvme_compliance.o 00:02:29.084 CC test/nvme/reset/reset.o 00:02:29.084 CC test/nvme/fdp/fdp.o 00:02:29.084 CC test/accel/dif/dif.o 00:02:29.084 CC test/blobfs/mkfs/mkfs.o 00:02:29.084 LINK scheduler 00:02:29.084 CC test/lvol/esnap/esnap.o 00:02:29.084 LINK memory_ut 00:02:29.084 LINK err_injection 00:02:29.084 LINK doorbell_aers 00:02:29.084 LINK boot_partition 00:02:29.084 LINK startup 00:02:29.084 LINK connect_stress 00:02:29.084 LINK fused_ordering 00:02:29.342 LINK reserve 00:02:29.342 LINK simple_copy 00:02:29.342 LINK sgl 00:02:29.342 LINK mkfs 00:02:29.342 LINK nvme_dp 00:02:29.342 LINK aer 00:02:29.342 LINK reset 00:02:29.342 LINK overhead 00:02:29.342 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:29.342 CC examples/nvme/hello_world/hello_world.o 00:02:29.342 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:29.342 LINK nvme_compliance 00:02:29.342 LINK fdp 00:02:29.342 CC examples/nvme/arbitration/arbitration.o 00:02:29.342 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:29.342 CC examples/nvme/abort/abort.o 00:02:29.342 CC examples/nvme/reconnect/reconnect.o 00:02:29.342 CC examples/nvme/hotplug/hotplug.o 00:02:29.342 LINK dif 00:02:29.600 CC examples/accel/perf/accel_perf.o 00:02:29.600 CC examples/blob/hello_world/hello_blob.o 00:02:29.600 CC examples/blob/cli/blobcli.o 00:02:29.600 LINK 
pmr_persistence 00:02:29.600 LINK cmb_copy 00:02:29.600 LINK hello_world 00:02:29.600 LINK hotplug 00:02:29.600 LINK arbitration 00:02:29.858 LINK reconnect 00:02:29.858 LINK abort 00:02:29.858 LINK hello_blob 00:02:29.858 LINK iscsi_fuzz 00:02:29.858 LINK nvme_manage 00:02:29.858 LINK cuse 00:02:29.858 LINK accel_perf 00:02:30.118 CC test/bdev/bdevio/bdevio.o 00:02:30.118 LINK blobcli 00:02:30.378 LINK bdevio 00:02:30.378 CC examples/bdev/bdevperf/bdevperf.o 00:02:30.635 CC examples/bdev/hello_world/hello_bdev.o 00:02:30.893 LINK hello_bdev 00:02:31.157 LINK bdevperf 00:02:31.725 CC examples/nvmf/nvmf/nvmf.o 00:02:32.294 LINK nvmf 00:02:34.196 LINK esnap 00:02:34.455 00:02:34.455 real 0m52.010s 00:02:34.455 user 8m13.676s 00:02:34.455 sys 4m8.599s 00:02:34.455 20:00:59 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:34.455 20:00:59 make -- common/autotest_common.sh@10 -- $ set +x 00:02:34.455 ************************************ 00:02:34.455 END TEST make 00:02:34.455 ************************************ 00:02:34.455 20:00:59 -- common/autotest_common.sh@1142 -- $ return 0 00:02:34.455 20:00:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:34.455 20:00:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:34.455 20:00:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:34.455 20:00:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.455 20:00:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:34.455 20:00:59 -- pm/common@44 -- $ pid=3920713 00:02:34.455 20:00:59 -- pm/common@50 -- $ kill -TERM 3920713 00:02:34.455 20:00:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.455 20:00:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:34.455 20:00:59 -- pm/common@44 -- $ pid=3920714 00:02:34.455 20:00:59 -- pm/common@50 -- $ kill -TERM 3920714 00:02:34.455 20:00:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.455 20:00:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:34.455 20:00:59 -- pm/common@44 -- $ pid=3920715 00:02:34.455 20:00:59 -- pm/common@50 -- $ kill -TERM 3920715 00:02:34.455 20:00:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.455 20:00:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:34.455 20:00:59 -- pm/common@44 -- $ pid=3920736 00:02:34.455 20:00:59 -- pm/common@50 -- $ sudo -E kill -TERM 3920736 00:02:34.714 20:00:59 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:34.714 20:00:59 -- nvmf/common.sh@7 -- # uname -s 00:02:34.714 20:00:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:34.714 20:00:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:34.714 20:00:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:34.714 20:00:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:34.714 20:00:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:34.714 20:00:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:34.714 20:00:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:34.714 20:00:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:34.714 20:00:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:34.714 20:00:59 -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:34.714 20:00:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:02:34.714 20:00:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:02:34.714 20:00:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:34.714 20:00:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:34.714 20:00:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:34.714 20:00:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:34.714 20:00:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:34.714 20:00:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:34.714 20:00:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:34.714 20:00:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:34.714 20:00:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.714 20:00:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.714 20:00:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.714 20:00:59 -- paths/export.sh@5 -- # export PATH 00:02:34.714 20:00:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.714 20:00:59 -- nvmf/common.sh@47 -- # : 0 00:02:34.714 20:00:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:34.714 20:00:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:34.714 20:00:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:34.714 20:00:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:34.714 20:00:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:34.714 20:00:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:34.714 20:00:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:34.714 20:00:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:34.714 20:00:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:34.714 20:00:59 -- spdk/autotest.sh@32 -- # uname -s 00:02:34.714 20:00:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:34.714 20:00:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:34.714 20:00:59 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:34.714 20:00:59 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:34.714 20:00:59 -- 
spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:34.714 20:00:59 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:34.714 20:00:59 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:34.714 20:00:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:34.714 20:00:59 -- spdk/autotest.sh@48 -- # udevadm_pid=3982999 00:02:34.714 20:00:59 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:34.714 20:00:59 -- pm/common@17 -- # local monitor 00:02:34.714 20:00:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.714 20:00:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:34.714 20:00:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.714 20:00:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.714 20:00:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:34.714 20:00:59 -- pm/common@25 -- # sleep 1 00:02:34.714 20:00:59 -- pm/common@21 -- # date +%s 00:02:34.714 20:00:59 -- pm/common@21 -- # date +%s 00:02:34.714 20:00:59 -- pm/common@21 -- # date +%s 00:02:34.714 20:00:59 -- pm/common@21 -- # date +%s 00:02:34.714 20:00:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721066459 00:02:34.714 20:00:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721066459 00:02:34.714 20:00:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721066459 00:02:34.714 20:00:59 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721066459 00:02:34.714 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721066459_collect-vmstat.pm.log 00:02:34.714 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721066459_collect-cpu-load.pm.log 00:02:34.714 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721066459_collect-cpu-temp.pm.log 00:02:34.714 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721066459_collect-bmc-pm.bmc.pm.log 00:02:35.648 20:01:00 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:35.648 20:01:00 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:35.648 20:01:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:35.648 20:01:00 -- common/autotest_common.sh@10 -- # set +x 00:02:35.648 20:01:00 -- spdk/autotest.sh@59 -- # create_test_list 00:02:35.648 20:01:00 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:35.648 20:01:00 -- common/autotest_common.sh@10 -- # set +x 00:02:35.648 20:01:00 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:35.648 20:01:00 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:35.648 20:01:00 -- spdk/autotest.sh@61 -- # 
src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:35.648 20:01:00 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:35.648 20:01:00 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:35.648 20:01:00 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:35.648 20:01:00 -- common/autotest_common.sh@1455 -- # uname 00:02:35.648 20:01:00 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:35.648 20:01:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:35.648 20:01:00 -- common/autotest_common.sh@1475 -- # uname 00:02:35.648 20:01:00 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:35.648 20:01:00 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:35.648 20:01:00 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:35.648 20:01:00 -- spdk/autotest.sh@72 -- # hash lcov 00:02:35.648 20:01:00 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:35.648 20:01:00 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:35.648 --rc lcov_branch_coverage=1 00:02:35.648 --rc lcov_function_coverage=1 00:02:35.648 --rc genhtml_branch_coverage=1 00:02:35.648 --rc genhtml_function_coverage=1 00:02:35.648 --rc genhtml_legend=1 00:02:35.648 --rc geninfo_all_blocks=1 00:02:35.648 ' 00:02:35.648 20:01:00 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:35.648 --rc lcov_branch_coverage=1 00:02:35.648 --rc lcov_function_coverage=1 00:02:35.648 --rc genhtml_branch_coverage=1 00:02:35.648 --rc genhtml_function_coverage=1 00:02:35.648 --rc genhtml_legend=1 00:02:35.648 --rc geninfo_all_blocks=1 00:02:35.648 ' 00:02:35.648 20:01:00 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:35.648 --rc lcov_branch_coverage=1 00:02:35.648 --rc lcov_function_coverage=1 00:02:35.648 --rc genhtml_branch_coverage=1 00:02:35.648 --rc genhtml_function_coverage=1 00:02:35.648 --rc genhtml_legend=1 00:02:35.648 --rc geninfo_all_blocks=1 00:02:35.648 --no-external' 00:02:35.648 20:01:00 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:35.648 --rc lcov_branch_coverage=1 00:02:35.648 --rc lcov_function_coverage=1 00:02:35.648 --rc genhtml_branch_coverage=1 00:02:35.648 --rc genhtml_function_coverage=1 00:02:35.648 --rc genhtml_legend=1 00:02:35.648 --rc geninfo_all_blocks=1 00:02:35.648 --no-external' 00:02:35.648 20:01:00 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:35.905 lcov: LCOV version 1.14 00:02:35.905 20:01:01 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:37.799 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:37.799 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:37.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:37.799 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:37.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:37.800 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:37.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:37.800 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:37.800 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:37.800 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:37.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:37.800 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:37.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:37.800 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:37.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:37.800 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:37.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:37.800 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:37.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:37.800 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:37.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:37.800 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:37.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:37.800 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:37.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:37.800 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:37.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:37.800 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:37.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:37.800 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:38.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:38.059 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:38.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:38.059 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:38.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:38.059 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:38.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:38.060 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:38.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:38.060 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:38.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:38.060 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:38.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:38.060 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:38.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:38.060 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:38.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:38.060 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:38.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:38.060 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:38.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:38.060 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:38.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:38.060 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:38.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:38.060 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:38.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:38.060 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:38.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:38.060 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:38.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:38.060 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:38.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:38.060 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:38.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:38.060 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:38.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:38.060 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:38.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:38.060 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:38.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:38.060 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:52.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:52.944 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:11.023 20:01:35 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:11.023 20:01:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:11.024 20:01:35 -- common/autotest_common.sh@10 -- # set +x 00:03:11.024 20:01:35 -- spdk/autotest.sh@91 -- # rm -f 00:03:11.024 20:01:35 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:12.931 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:03:12.931 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:12.931 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:12.931 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:12.931 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:12.931 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:12.931 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:12.931 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:12.931 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:13.190 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:13.190 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:13.190 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:13.190 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:13.190 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:13.190 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:13.190 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:13.190 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:13.190 20:01:38 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:13.190 20:01:38 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:13.190 20:01:38 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:13.190 20:01:38 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:13.190 20:01:38 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:13.190 20:01:38 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:13.190 20:01:38 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:13.190 
20:01:38 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:13.190 20:01:38 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:13.190 20:01:38 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:03:13.190 20:01:38 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:03:13.190 20:01:38 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:03:13.190 20:01:38 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:03:13.190 20:01:38 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:03:13.190 20:01:38 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:13.448 No valid GPT data, bailing
00:03:13.448 20:01:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:13.448 20:01:38 -- scripts/common.sh@391 -- # pt=
00:03:13.448 20:01:38 -- scripts/common.sh@392 -- # return 1
00:03:13.448 20:01:38 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:13.448 1+0 records in
00:03:13.448 1+0 records out
00:03:13.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00192105 s, 546 MB/s
00:03:13.448 20:01:38 -- spdk/autotest.sh@118 -- # sync
00:03:13.448 20:01:38 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:13.448 20:01:38 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:13.448 20:01:38 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:18.717 20:01:43 -- spdk/autotest.sh@124 -- # uname -s
00:03:18.717 20:01:43 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:03:18.717 20:01:43 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:18.717 20:01:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:18.717 20:01:43 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:18.717 20:01:43 -- common/autotest_common.sh@10 -- # set +x
00:03:18.717 ************************************
00:03:18.717 START TEST setup.sh
00:03:18.717 ************************************
00:03:18.717 20:01:43 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:18.976 * Looking for test storage...
00:03:18.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:18.976 20:01:44 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:03:18.976 20:01:44 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:18.976 20:01:44 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:18.976 20:01:44 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:18.976 20:01:44 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:18.976 20:01:44 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:18.976 ************************************
00:03:18.976 START TEST acl
00:03:18.976 ************************************
00:03:18.976 20:01:44 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:18.976 * Looking for test storage...
00:03:18.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:18.976 20:01:44 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:18.976 20:01:44 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:18.976 20:01:44 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:18.976 20:01:44 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:18.976 20:01:44 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:18.976 20:01:44 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:18.976 20:01:44 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:18.976 20:01:44 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:18.976 20:01:44 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:18.977 20:01:44 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:18.977 20:01:44 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:18.977 20:01:44 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:18.977 20:01:44 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:18.977 20:01:44 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:18.977 20:01:44 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:18.977 20:01:44 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:22.261 20:01:47 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:22.261 20:01:47 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:22.261 20:01:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.261 20:01:47 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:22.261 20:01:47 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.261 20:01:47 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:24.163 Hugepages 00:03:24.163 node hugesize free / total 00:03:24.163 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:24.163 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:24.163 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.163 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:24.163 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:24.163 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.421 00:03:24.421 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.421 20:01:49 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:86:00.0 == *:*:*.* ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:24.421 20:01:49 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:24.421 20:01:49 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.421 20:01:49 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.421 20:01:49 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:24.421 ************************************ 00:03:24.421 START TEST denied 00:03:24.421 ************************************ 00:03:24.421 20:01:49 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:24.421 20:01:49 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:86:00.0' 00:03:24.421 20:01:49 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:24.421 20:01:49 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.421 20:01:49 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:86:00.0' 00:03:24.421 20:01:49 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:27.706 0000:86:00.0 (8086 0a54): Skipping denied controller at 0000:86:00.0 00:03:27.706 20:01:52 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:86:00.0 00:03:27.706 20:01:52 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:27.707 20:01:52 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:27.707 20:01:52 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:86:00.0 ]] 00:03:27.707 20:01:52 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:86:00.0/driver 00:03:27.707 20:01:52 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:27.707 20:01:52 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:27.707 20:01:52 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:03:27.707 20:01:52 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:27.707 20:01:52 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:31.896
00:03:31.896 real 0m6.722s
00:03:31.896 user 0m2.090s
00:03:31.896 sys 0m3.800s
00:03:31.896 20:01:56 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:31.896 20:01:56 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:03:31.896 ************************************
00:03:31.896 END TEST denied
00:03:31.896 ************************************
00:03:31.896 20:01:56 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:03:31.896 20:01:56 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:03:31.896 20:01:56 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:31.896 20:01:56 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:31.896 20:01:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:31.896 ************************************
00:03:31.896 START TEST allowed
00:03:31.896 ************************************
00:03:31.896 20:01:56 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed
00:03:31.896 20:01:56 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:86:00.0
00:03:31.896 20:01:56 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:03:31.896 20:01:56 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:86:00.0 .*: nvme -> .*'
00:03:31.896 20:01:56 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:03:31.896 20:01:56 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:35.225 0000:86:00.0 (8086 0a54): nvme -> vfio-pci
00:03:35.225 20:02:00 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:03:35.225 20:02:00 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:03:35.225 20:02:00 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:03:35.225 20:02:00 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:35.225 20:02:00 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:37.773
00:03:37.773 real 0m6.459s
00:03:37.773 user 0m1.848s
00:03:37.773 sys 0m3.544s
00:03:37.773 20:02:02 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:37.773 20:02:02 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:03:37.773 ************************************
00:03:37.773 END TEST allowed
00:03:37.773 ************************************
00:03:37.773 20:02:03 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:03:37.773
00:03:37.773 real 0m18.903s
00:03:37.773 user 0m5.979s
00:03:37.773 sys 0m11.049s
00:03:37.773 20:02:03 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:37.773 20:02:03 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:37.773 ************************************
00:03:37.773 END TEST acl
00:03:37.773 ************************************
00:03:37.773 20:02:03 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:37.773 20:02:03 setup.sh --
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:37.773 20:02:03 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.773 20:02:03 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.773 20:02:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:37.773 ************************************ 00:03:37.773 START TEST hugepages 00:03:37.773 ************************************ 00:03:37.773 20:02:03 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:38.033 * Looking for test storage... 00:03:38.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 69519480 kB' 'MemAvailable: 72981924 kB' 'Buffers: 2704 kB' 'Cached: 14543124 kB' 'SwapCached: 0 kB' 'Active: 11700932 kB' 'Inactive: 3529048 kB' 'Active(anon): 11248108 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 687432 kB' 'Mapped: 217844 kB' 'Shmem: 10563956 kB' 'KReclaimable: 267936 kB' 'Slab: 915216 kB' 'SReclaimable: 267936 kB' 'SUnreclaim: 647280 kB' 'KernelStack: 23024 kB' 'PageTables: 9960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434752 kB' 'Committed_AS: 12687688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220264 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.033 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.034 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.035 20:02:03 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.035 20:02:03 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.035 
20:02:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:38.035 20:02:03 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:38.035 20:02:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:38.035 20:02:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.035 20:02:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:38.035 ************************************ 00:03:38.035 START TEST default_setup 00:03:38.035 ************************************ 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.035 20:02:03 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:40.567 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:40.567 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:40.567 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:40.567 
0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:40.567 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:40.567 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:40.567 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:40.567 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:40.823 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:40.823 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:40.823 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:40.823 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:40.823 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:40.823 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:40.823 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:40.823 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:41.762 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:03:41.762 20:02:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:41.762 20:02:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:41.762 20:02:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:41.762 20:02:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:41.762 20:02:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:41.762 20:02:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:41.762 20:02:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:41.762 20:02:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:41.762 20:02:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:41.762 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:41.762 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:41.762 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:41.762 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71675332 kB' 'MemAvailable: 75137760 kB' 'Buffers: 2704 kB' 'Cached: 14543224 kB' 'SwapCached: 0 kB' 'Active: 11719352 kB' 'Inactive: 3529048 kB' 'Active(anon): 11266528 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705932 kB' 'Mapped: 218324 kB' 'Shmem: 10564056 kB' 'KReclaimable: 267904 kB' 'Slab: 913708 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 645804 kB' 'KernelStack: 23024 kB' 'PageTables: 10252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 
0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12709068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220184 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
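(The runs of near-identical xtrace lines above, and the similar runs that follow, are the harness walking /proc/meminfo one field at a time until it reaches the key it was asked for: Hugepagesize resolves to 2048 during setup, and verify_nr_hugepages repeats the same walk for AnonHugePages, HugePages_Surp and HugePages_Rsvd. A minimal sketch of that lookup pattern, with a hypothetical helper name — lookup_meminfo is not the SPDK function, and per-node meminfo handling is omitted — could look like:

    #!/usr/bin/env bash
    # Sketch of the lookup the trace shows: split each /proc/meminfo line on
    # ': ' / spaces, skip lines until the requested key matches, print its value.
    lookup_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # not the wanted field, keep scanning
            echo "$val"                        # numeric value; the "kB" unit lands in "_"
            return 0
        done < /proc/meminfo
        return 1
    }

    # Example calls matching the values captured in the snapshots above:
    lookup_meminfo Hugepagesize      # -> 2048 (kB)
    lookup_meminfo HugePages_Total   # -> 1024 in the meminfo snapshots above

The trace continues below with the same scan applied to the remaining hugepage counters.)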
00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.763 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71686200 kB' 'MemAvailable: 75148628 kB' 'Buffers: 2704 kB' 'Cached: 14543228 kB' 'SwapCached: 0 kB' 'Active: 11719444 kB' 'Inactive: 3529048 kB' 'Active(anon): 11266620 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705792 kB' 'Mapped: 217760 kB' 'Shmem: 10564060 kB' 'KReclaimable: 267904 kB' 'Slab: 913708 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 645804 kB' 'KernelStack: 23056 kB' 'PageTables: 10556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12707596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220152 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:41.766 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71685568 kB' 'MemAvailable: 75147996 kB' 'Buffers: 2704 kB' 'Cached: 14543244 kB' 'SwapCached: 0 kB' 'Active: 11720756 kB' 'Inactive: 3529048 kB' 'Active(anon): 11267932 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 707376 kB' 'Mapped: 217768 kB' 'Shmem: 10564076 kB' 'KReclaimable: 267904 kB' 'Slab: 913704 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 645800 kB' 'KernelStack: 23296 kB' 'PageTables: 11328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12709108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220184 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 
20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.767 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 20:02:07 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:41.769 nr_hugepages=1024 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:41.769 resv_hugepages=0 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:41.769 surplus_hugepages=0 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:41.769 anon_hugepages=0 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 
20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71681880 kB' 'MemAvailable: 75144308 kB' 'Buffers: 2704 kB' 'Cached: 14543268 kB' 'SwapCached: 0 kB' 'Active: 11719456 kB' 'Inactive: 3529048 kB' 'Active(anon): 11266632 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705952 kB' 'Mapped: 217760 kB' 'Shmem: 10564100 kB' 'KReclaimable: 267904 kB' 'Slab: 913928 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 646024 kB' 'KernelStack: 23136 kB' 'PageTables: 10724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12709132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220152 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
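[Editor's note] The trace above and below is setup/common.sh's get_meminfo helper at work: it mapfiles the whole meminfo table, strips any "Node <n> " prefix, then scans key by key with IFS=': ' until it reaches the requested field (HugePages_Surp, HugePages_Rsvd, HugePages_Total in turn), echoes that value and returns 0; every non-matching key shows up as one "continue" line. Below is a minimal standalone sketch of that pattern, reconstructed only from the commands visible in this trace, not the verbatim setup/common.sh source.

# Hypothetical reconstruction of the parsing pattern seen in the xtrace above.
shopt -s extglob   # needed for the +([0-9]) pattern used on per-node meminfo lines

get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem
    # A node argument switches the source to that node's own meminfo file.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines carry a "Node <n> " prefix; drop it.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        # Non-matching keys are skipped (the "continue" lines in the trace).
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Called as get_meminfo HugePages_Rsvd it returns 0 here; with a node argument (get_meminfo HugePages_Surp 0, further down in this log) it reads node0's meminfo instead of /proc/meminfo.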
00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.770 20:02:07 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 40414148 kB' 'MemUsed: 7654248 kB' 'SwapCached: 0 kB' 'Active: 3871020 kB' 'Inactive: 229648 kB' 'Active(anon): 3743988 kB' 'Inactive(anon): 0 kB' 'Active(file): 127032 kB' 'Inactive(file): 229648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3942272 kB' 'Mapped: 88240 kB' 'AnonPages: 161608 kB' 'Shmem: 3585592 kB' 'KernelStack: 12216 kB' 'PageTables: 5936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119664 kB' 'Slab: 421900 kB' 
'SReclaimable: 119664 kB' 'SUnreclaim: 302236 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.031 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.031 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.031 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.031 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.031 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.031 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.031 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.031 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.031 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.031 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.031 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.031 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.031 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.031 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.031 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
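[Editor's note] Once surp, resv and the total have been read, hugepages.sh does the bookkeeping echoed above: nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, the sanity check (( 1024 == nr_hugepages + surp + resv )), and then a per-node pass over /sys/devices/system/node/node<N> (no_nodes=2 here) that re-reads HugePages_Surp from each node's own meminfo. Below is a hypothetical sketch of that verification step; the names (surp, resv, nr_hugepages, nodes_test, no_nodes) come from the trace, but this is not the verbatim hugepages.sh source.

# Hypothetical sketch of the accounting visible in the trace; get_meminfo as sketched earlier.
shopt -s extglob

surp=$(get_meminfo HugePages_Surp)        # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)        # 0 in this run
nr_hugepages=1024

# The system-wide total has to equal the requested pages plus surplus and reserved pages.
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

# Enumerate the NUMA nodes the same way the trace does ...
declare -a nodes_test=()
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_test[${node##*node}]=0
done
no_nodes=${#nodes_test[@]}                # 2 on this host

# ... and re-check each node's surplus from that node's own meminfo file.
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    echo "node$node HugePages_Surp: $(get_meminfo HugePages_Surp "$node")"
done

In this run node0's meminfo (printed just above) reports HugePages_Total: 1024, HugePages_Free: 1024 and HugePages_Surp: 0, which is exactly what the per-node scan that resumes below is reading out.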
00:03:42.031 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.032 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
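The loop condensed above is the test helper's generic meminfo lookup: it opens /proc/meminfo (or a node's own meminfo file) and reads it field by field with IFS=': ' until it finds the requested key, then echoes that key's value. A minimal stand-alone sketch of that pattern follows; the function name get_meminfo_sketch and the sed-based stripping of the per-node "Node N " prefix are illustrative substitutes for what setup/common.sh actually does with an extglob expansion over a mapfile'd array.

#!/usr/bin/env bash
# Sketch of the lookup the trace performs: scan a meminfo file for one field
# and print its numeric value (0 if the field has no value column).
get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node queries read the node's own meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node lines carry a "Node <N> " prefix; drop it so both file
        # layouts parse with the same IFS=': ' split seen in the trace.
        while IFS=': ' read -r var val _; do
                if [[ $var == "$get" ]]; then
                        echo "${val:-0}"
                        return 0
                fi
        done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
        return 1
}
# Examples: get_meminfo_sketch HugePages_Surp      -> 0 on this host
#           get_meminfo_sketch HugePages_Free 0    -> free 2048 kB pages on node0

verify_nr_hugepages drives this same kind of lookup several more times below (AnonHugePages, HugePages_Surp, HugePages_Rsvd) before checking the per-node counts.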
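The per_node_1G_alloc trace that follows shows get_test_nr_hugepages 1048576 0 1 turning a 1048576 kB request into nr_hugepages=512 and then giving each of the two requested NUMA nodes the full 512 pages, which is what the NRHUGE=512 HUGENODE=0,1 call to scripts/setup.sh further down applies. A rough sketch of that arithmetic, assuming the 2048 kB default hugepage size reported in the meminfo dumps (the function name and the no-node fallback are illustrative, not the helper's exact code):

#!/usr/bin/env bash
# Sketch: convert a size in kB into a hugepage count and assign it per node,
# mirroring the get_test_nr_hugepages / get_test_nr_hugepages_per_node trace.
get_test_nr_hugepages_sketch() {
        local size_kb=$1; shift                   # e.g. 1048576 (1 GiB in kB)
        local default_hugepage_kb=2048            # Hugepagesize from meminfo
        local nr_hugepages=$(( size_kb / default_hugepage_kb ))
        local -a nodes_test=()
        local node
        if (( $# > 0 )); then
                # Explicit node list: each listed node gets the full count,
                # as the trace shows for nodes 0 and 1.
                for node in "$@"; do
                        nodes_test[node]=$nr_hugepages
                done
        else
                # Illustrative fallback when no nodes are named.
                nodes_test[0]=$nr_hugepages
        fi
        for node in "${!nodes_test[@]}"; do
                echo "node${node}=${nodes_test[node]}"
        done
}
# get_test_nr_hugepages_sketch 1048576 0 1   prints node0=512 and node1=512,
# matching NRHUGE=512 HUGENODE=0,1 passed to scripts/setup.sh below.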
00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:42.033 node0=1024 expecting 1024 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:42.033 00:03:42.033 real 0m3.883s 00:03:42.033 user 0m1.213s 00:03:42.033 sys 0m1.854s 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.033 20:02:07 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:42.033 ************************************ 00:03:42.033 END TEST default_setup 00:03:42.033 ************************************ 00:03:42.033 20:02:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:42.033 20:02:07 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:42.033 20:02:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.033 20:02:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.033 20:02:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:42.033 ************************************ 00:03:42.033 START TEST per_node_1G_alloc 00:03:42.033 ************************************ 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.033 20:02:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:44.565 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:44.565 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:44.565 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:44.565 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:44.565 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:44.565 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:44.565 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:44.565 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:44.565 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:44.565 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:44.565 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:44.565 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:44.565 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:44.565 
0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:44.565 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:44.565 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:44.565 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.829 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.830 20:02:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71685372 kB' 'MemAvailable: 75147800 kB' 'Buffers: 2704 kB' 'Cached: 14543376 kB' 'SwapCached: 0 kB' 'Active: 11719872 kB' 'Inactive: 3529048 kB' 'Active(anon): 11267048 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705688 kB' 'Mapped: 217868 kB' 'Shmem: 10564208 kB' 'KReclaimable: 267904 kB' 'Slab: 913216 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 645312 kB' 'KernelStack: 22944 kB' 'PageTables: 9904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12708512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220456 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB'
[00:03:44.830 20:02:09 - 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32: the IFS=': ' / read -r var val _ loop walks each /proc/meminfo field (MemTotal ... HardwareCorrupted) looking for AnonHugePages]
00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71690264 kB' 'MemAvailable: 75152692 kB' 'Buffers: 2704 kB' 'Cached: 14543380 kB' 'SwapCached: 0 kB' 'Active: 11719976 kB' 'Inactive: 3529048 kB' 'Active(anon): 11267152 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 706548 kB' 'Mapped: 217776 kB' 'Shmem: 10564212 kB' 'KReclaimable: 267904 kB' 'Slab: 913232 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 645328 kB' 'KernelStack: 22960 kB' 'PageTables: 9752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12709952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220392 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.831 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.831 20:02:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[00:03:44.831 - 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32: the same IFS=': ' / read loop again walks every /proc/meminfo field, this time looking for HugePages_Surp]
00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71691488 kB' 'MemAvailable: 75153916 kB' 'Buffers: 2704 kB' 'Cached: 14543400 kB' 'SwapCached: 0 kB' 'Active: 11720092 kB' 'Inactive: 3529048 kB' 'Active(anon): 11267268 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 707068 kB' 'Mapped: 217776 kB' 'Shmem: 10564232 kB' 'KReclaimable: 267904 kB' 'Slab: 913200 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 645296 kB' 'KernelStack: 22832 kB' 'PageTables: 9488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12710156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220360 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.833 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.833 20:02:10 
[get_meminfo trace: each /proc/meminfo field from MemTotal through HugePages_Free is compared against HugePages_Rsvd and skipped with "continue" until the HugePages_Rsvd line matches]
00:03:44.835 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:44.835 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
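What the trace above is doing: setup/common.sh's get_meminfo reads the whole of /proc/meminfo with IFS=': ', then walks it field by field, skipping every key that is not the one requested and echoing the value once it matches (0 for HugePages_Rsvd here). Below is a minimal standalone sketch of that style of lookup, assuming only a stock /proc/meminfo; the helper name get_meminfo_field and its interface are illustrative, not the SPDK helper itself.

#!/usr/bin/env bash
# Sketch: print the value of one /proc/meminfo field, the way the traced
# lookup does it (split each line on ':' and whitespace, compare the key,
# print the value on a match). Illustrative only, not setup/common.sh.
get_meminfo_field() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            printf '%s\n' "$val"   # numeric value only; any trailing "kB" ends up in $_
            return 0
        fi
    done </proc/meminfo
    return 1
}

get_meminfo_field HugePages_Rsvd    # prints 0 on this box, per the dump above
get_meminfo_field HugePages_Total   # prints 1024 on this box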
00:03:44.835 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:44.835 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:44.835 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:44.835 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:44.835 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:44.835 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:44.835 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:44.835 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[get_meminfo trace: the same function-local setup as above (get=HugePages_Total, node=, mem_f=/proc/meminfo, mapfile -t mem), then the full /proc/meminfo snapshot:]
00:03:44.835 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71690552 kB' 'MemAvailable: 75152980 kB' 'Buffers: 2704 kB' 'Cached: 14543420 kB' 'SwapCached: 0 kB' 'Active: 11721020 kB' 'Inactive: 3529048 kB' 'Active(anon): 11268196 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 708132 kB' 'Mapped: 217776 kB' 'Shmem: 10564252 kB' 'KReclaimable: 267904 kB' 'Slab: 913040 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 645136 kB' 'KernelStack: 23088 kB' 'PageTables: 10428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12709820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220360 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB'
[get_meminfo trace: each /proc/meminfo field from MemTotal through Unaccepted is compared against HugePages_Total and skipped with "continue" until the HugePages_Total line matches]
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
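At this point the script has confirmed that the kernel reports exactly the requested 1024 hugepages with zero reserved and zero surplus, and get_nodes then records two NUMA nodes with 512 pages each (no_nodes=2). Below is a rough standalone equivalent of that discovery-and-split step, assuming the same /sys/devices/system/node layout; the array and variable names are illustrative, not the exact setup/hugepages.sh ones.

#!/usr/bin/env bash
# Sketch: discover NUMA nodes and split a hugepage target evenly across them,
# mirroring the nodes_sys[...]=512 / no_nodes=2 steps in the trace above.
shopt -s extglob nullglob
declare -A nodes_sys
target=1024

node_dirs=(/sys/devices/system/node/node+([0-9]))
no_nodes=${#node_dirs[@]}
(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }

for node in "${node_dirs[@]}"; do
    # ${node##*node} strips everything up to the last "node", leaving the index
    nodes_sys[${node##*node}]=$(( target / no_nodes ))
done

declare -p nodes_sys no_nodes   # e.g. nodes_sys=([0]="512" [1]="512"), no_nodes=2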
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.837 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 41454136 kB' 'MemUsed: 6614260 kB' 'SwapCached: 0 kB' 'Active: 3871384 kB' 'Inactive: 229648 kB' 'Active(anon): 3744352 kB' 'Inactive(anon): 0 kB' 'Active(file): 127032 kB' 'Inactive(file): 229648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3942336 kB' 'Mapped: 88256 kB' 'AnonPages: 162140 kB' 'Shmem: 3585656 kB' 'KernelStack: 11576 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119664 kB' 'Slab: 421280 kB' 'SReclaimable: 119664 kB' 'SUnreclaim: 301616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[get_meminfo trace: each field of /sys/devices/system/node/node0/meminfo is compared against HugePages_Surp and skipped with "continue"; the scan is still in progress at this point in the log]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218208 kB' 'MemFree: 30233572 kB' 'MemUsed: 13984636 kB' 'SwapCached: 0 kB' 'Active: 7851180 kB' 'Inactive: 3299400 kB' 'Active(anon): 7525388 kB' 'Inactive(anon): 0 kB' 'Active(file): 325792 kB' 'Inactive(file): 3299400 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10603812 kB' 'Mapped: 130024 kB' 'AnonPages: 547316 kB' 'Shmem: 6978620 kB' 
'KernelStack: 11384 kB' 'PageTables: 5860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 148240 kB' 'Slab: 491760 kB' 'SReclaimable: 148240 kB' 'SUnreclaim: 343520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.839 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.840 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.841 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.841 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.841 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.841 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:44.841 node0=512 expecting 512 00:03:44.841 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.841 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.841 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.841 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:44.841 node1=512 expecting 512 00:03:44.841 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:44.841 00:03:44.841 real 0m2.941s 00:03:44.841 user 0m1.177s 00:03:44.841 sys 0m1.799s 00:03:44.841 20:02:10 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.841 20:02:10 
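The get_meminfo calls traced above repeatedly scan either /proc/meminfo or a per-NUMA-node /sys/devices/system/node/nodeN/meminfo file, key by key, until the requested field (here HugePages_Surp) is reached. A minimal bash sketch of that lookup, consistent with the trace but not the literal setup/common.sh implementation (the helper name and prefix handling below are illustrative):

# Minimal sketch of the meminfo lookup traced above (illustrative only):
# pick the system-wide or per-NUMA-node meminfo file, strip the "Node <N> "
# prefix, split on ': ', and print the value for the requested key.
get_meminfo_sketch() {
    local get=$1 node=${2:-} line var val _
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}            # per-node lines carry this prefix
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0
}
# e.g. the node-1 surplus hugepages queried in the trace above:
# get_meminfo_sketch HugePages_Surp 1    # -> 0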
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:44.841 ************************************ 00:03:44.841 END TEST per_node_1G_alloc 00:03:44.841 ************************************ 00:03:44.841 20:02:10 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:44.841 20:02:10 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:44.841 20:02:10 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.841 20:02:10 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.841 20:02:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:45.100 ************************************ 00:03:45.100 START TEST even_2G_alloc 00:03:45.100 ************************************ 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:45.100 20:02:10 
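The even_2G_alloc trace above resolves get_test_nr_hugepages 2097152 to nr_hugepages=1024 and an even 512/512 split across the two NUMA nodes, which NRHUGE=1024 with HUGE_EVEN_ALLOC=yes then applies. A back-of-the-envelope reproduction of that arithmetic, assuming the size argument is in kB and the default hugepage size is the 2048 kB reported in the meminfo dumps (both assumptions, not quoted from hugepages.sh):

size_kb=2097152                               # argument passed to get_test_nr_hugepages
hugepage_kb=2048                              # Hugepagesize seen in the meminfo dumps
nr_hugepages=$(( size_kb / hugepage_kb ))     # 1024, i.e. the NRHUGE value above
nodes=2                                       # _no_nodes in the trace
per_node=$(( nr_hugepages / nodes ))          # 512 per node, matching nodes_test[...]=512
echo "NRHUGE=$nr_hugepages per_node=$per_node"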
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.100 20:02:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:47.648 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:47.648 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:47.648 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:47.648 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:47.648 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:47.648 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:47.648 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:47.648 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:47.648 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:47.648 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:47.648 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:47.648 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:47.648 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:47.648 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:47.648 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:47.648 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:47.648 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.915 20:02:13 
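After "setup output" rebinds the devices (the vfio-pci lines above), verify_nr_hugepages first checks whether transparent hugepages are disabled before reading AnonHugePages: the hugepages.sh@96 test above matches the THP policy string "always [madvise] never" against "[never]". A small sketch of that check; the sysfs path below is the standard Linux location and is an assumption here, not a quote from hugepages.sh:

# Sketch: decide whether THP can contribute anonymous hugepages, mirroring the
# hugepages.sh@96 test above. The policy file reads like "always [madvise] never";
# the bracketed word is the active mode.
thp=/sys/kernel/mm/transparent_hugepage/enabled   # assumed path
if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
    echo "THP active -> AnonHugePages may be non-zero"
else
    echo "THP disabled (or unavailable)"
fi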
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71686860 kB' 'MemAvailable: 75149288 kB' 'Buffers: 2704 kB' 'Cached: 14543528 kB' 'SwapCached: 0 kB' 'Active: 11717656 kB' 'Inactive: 3529048 kB' 'Active(anon): 11264832 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 703240 kB' 'Mapped: 216712 kB' 'Shmem: 10564360 kB' 'KReclaimable: 267904 kB' 'Slab: 912440 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 644536 kB' 'KernelStack: 22880 kB' 'PageTables: 9760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12695908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220408 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.915 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71691972 kB' 'MemAvailable: 75154400 kB' 'Buffers: 2704 kB' 'Cached: 14543532 kB' 'SwapCached: 0 kB' 'Active: 11716932 kB' 'Inactive: 3529048 kB' 'Active(anon): 11264108 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 702976 kB' 'Mapped: 216636 kB' 'Shmem: 10564364 kB' 'KReclaimable: 267904 kB' 'Slab: 912388 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 644484 kB' 'KernelStack: 22800 kB' 'PageTables: 9748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12695560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220344 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.916 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.917 20:02:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71696336 kB' 'MemAvailable: 75158764 kB' 'Buffers: 2704 kB' 'Cached: 14543548 kB' 'SwapCached: 0 kB' 'Active: 11716260 kB' 'Inactive: 3529048 kB' 'Active(anon): 11263436 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 702268 kB' 'Mapped: 216620 kB' 'Shmem: 10564380 kB' 'KReclaimable: 267904 kB' 'Slab: 912260 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 644356 kB' 'KernelStack: 22624 kB' 'PageTables: 8968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12693096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220264 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 
20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.918 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:47.919 nr_hugepages=1024 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.919 resv_hugepages=0 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.919 surplus_hugepages=0 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.919 anon_hugepages=0 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.919 20:02:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.919 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71696444 kB' 'MemAvailable: 75158872 kB' 'Buffers: 2704 kB' 'Cached: 14543572 kB' 'SwapCached: 0 kB' 'Active: 11716440 kB' 'Inactive: 3529048 kB' 'Active(anon): 11263616 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 702400 kB' 'Mapped: 216620 kB' 'Shmem: 10564404 kB' 'KReclaimable: 267904 kB' 'Slab: 912644 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 644740 kB' 'KernelStack: 22736 kB' 'PageTables: 9160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12693120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220264 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 
20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.920 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
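The loop traced above is setup/common.sh's meminfo lookup: when a node number is given it switches the source from /proc/meminfo to /sys/devices/system/node/node<N>/meminfo, strips the leading "Node <N> " prefix from every line, and scans key/value pairs with IFS=': ' until the requested field is found. A minimal standalone sketch of that pattern follows; the helper name is illustrative, not the exact SPDK function.

#!/usr/bin/env bash
# get_node_meminfo KEY [NODE] - print the value of KEY, read from the
# per-node meminfo file when NODE is given, else from /proc/meminfo.
shopt -s extglob
get_node_meminfo() {
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# Example: surplus hugepages currently reported for NUMA node 0
get_node_meminfo HugePages_Surp 0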
00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 41463020 kB' 'MemUsed: 6605376 kB' 'SwapCached: 0 kB' 'Active: 3868768 kB' 'Inactive: 229648 kB' 'Active(anon): 3741736 kB' 'Inactive(anon): 0 kB' 'Active(file): 127032 kB' 'Inactive(file): 229648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3942508 kB' 'Mapped: 87852 kB' 'AnonPages: 159028 kB' 'Shmem: 3585828 kB' 'KernelStack: 11560 kB' 'PageTables: 3692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119664 kB' 'Slab: 421052 kB' 'SReclaimable: 119664 kB' 'SUnreclaim: 301388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.921 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218208 kB' 'MemFree: 30234352 kB' 'MemUsed: 13983856 kB' 'SwapCached: 0 kB' 'Active: 7847332 kB' 'Inactive: 3299400 kB' 'Active(anon): 7521540 kB' 'Inactive(anon): 0 kB' 'Active(file): 325792 kB' 'Inactive(file): 3299400 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10603808 kB' 'Mapped: 128768 kB' 'AnonPages: 542992 kB' 'Shmem: 6978616 kB' 'KernelStack: 11160 kB' 'PageTables: 5416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 148240 kB' 'Slab: 491592 kB' 'SReclaimable: 148240 kB' 'SUnreclaim: 343352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.922 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 
20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:47.923 node0=512 expecting 512 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:47.923 node1=512 expecting 512 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:47.923 00:03:47.923 real 0m3.058s 00:03:47.923 user 0m1.257s 00:03:47.923 sys 0m1.832s 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.923 20:02:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:47.923 ************************************ 00:03:47.923 END TEST even_2G_alloc 00:03:47.923 ************************************ 00:03:48.182 20:02:13 setup.sh.hugepages -- 
common/autotest_common.sh@1142 -- # return 0 00:03:48.182 20:02:13 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:48.182 20:02:13 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.182 20:02:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.182 20:02:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.182 ************************************ 00:03:48.182 START TEST odd_alloc 00:03:48.182 ************************************ 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.182 20:02:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:50.730 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:50.730 
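The odd_alloc prologue above calls get_test_nr_hugepages with 2098176 kB (HUGEMEM=2049 MiB of 2048 kB pages), settles on an intentionally odd total of 1025 hugepages, and splits them 513/512 across the two NUMA nodes. A rough sketch of that floor-plus-remainder split, assuming the extra page simply lands on the first node; the helper name and ordering are illustrative, not the script's exact logic.

# Split a total hugepage count across NUMA nodes: every node gets the floor
# share and the remainder pages go to the first nodes, one extra page each.
split_hugepages() {
    local total=$1 nodes=$2
    local base=$(( total / nodes )) rem=$(( total % nodes ))
    local i
    for (( i = 0; i < nodes; i++ )); do
        echo "node$i=$(( base + (i < rem ? 1 : 0) ))"
    done
}
split_hugepages 1025 2   # -> node0=513, node1=512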
0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:50.730 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:50.730 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:50.730 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:50.730 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:50.730 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:50.730 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:50.730 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:50.730 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:50.730 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:50.730 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:50.730 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:50.730 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:50.730 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:50.730 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:50.730 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:50.730 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:50.730 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:50.730 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:50.730 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:50.730 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:50.730 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:50.730 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:50.730 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.730 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.730 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.730 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.730 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.730 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.730 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.730 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.730 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.730 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.730 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.730 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.730 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71691136 kB' 'MemAvailable: 75153564 kB' 'Buffers: 2704 kB' 'Cached: 14543688 kB' 'SwapCached: 0 kB' 'Active: 11717684 kB' 'Inactive: 3529048 kB' 'Active(anon): 11264860 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 703560 kB' 'Mapped: 216640 kB' 'Shmem: 10564520 kB' 'KReclaimable: 267904 kB' 'Slab: 912120 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 644216 kB' 'KernelStack: 22832 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12693860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220264 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 
20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.731 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 
20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.732 20:02:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71691920 kB' 'MemAvailable: 75154348 kB' 'Buffers: 2704 kB' 'Cached: 14543688 kB' 'SwapCached: 0 kB' 'Active: 11717072 kB' 'Inactive: 3529048 kB' 'Active(anon): 11264248 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 702952 kB' 'Mapped: 216624 kB' 'Shmem: 10564520 kB' 'KReclaimable: 267904 kB' 'Slab: 912268 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 644364 kB' 'KernelStack: 22784 kB' 'PageTables: 9240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12693876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220216 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
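For readers following the trace: the repeated IFS=': ' / read -r var val _ / continue entries above are the xtrace of a per-field scan over /proc/meminfo performed by setup/common.sh's get_meminfo helper. A minimal bash sketch of that kind of lookup, assuming the illustrative name get_meminfo_sketch and simplified NUMA-node handling (this is not the real helper, just a reduction of what the trace shows), could look like this:

    #!/usr/bin/env bash
    # Sketch of the meminfo field lookup traced above; names are illustrative.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node queries read the NUMA node's meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Node files prefix each row with "Node <n> "; drop it so the layout
        # matches /proc/meminfo's "Key: value kB" form, then scan field by field.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every field except the one requested
            echo "${val:-0}"                   # value only; the "kB" unit falls into _
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        echo 0                                 # field not present at all
    }

    # Example: the lookup being traced in this pass.
    get_meminfo_sketch HugePages_Surp

Run standalone, get_meminfo_sketch HugePages_Surp reproduces the shape of the scan traced here: every non-matching field hits the continue branch, and the matching field's value (0 in this run) is echoed at the end.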
00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.732 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.733 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.999 
20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71691840 kB' 'MemAvailable: 75154268 kB' 'Buffers: 2704 kB' 'Cached: 14543708 kB' 'SwapCached: 0 kB' 'Active: 11717016 kB' 'Inactive: 3529048 kB' 'Active(anon): 11264192 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 702920 kB' 'Mapped: 216632 kB' 'Shmem: 10564540 kB' 'KReclaimable: 267904 kB' 'Slab: 912380 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 644476 kB' 'KernelStack: 22736 kB' 'PageTables: 9116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12693896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220216 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.999 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.000 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:51.001 nr_hugepages=1025 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:51.001 resv_hugepages=0 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:51.001 surplus_hugepages=0 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:51.001 anon_hugepages=0 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages 
)) 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71692248 kB' 'MemAvailable: 75154676 kB' 'Buffers: 2704 kB' 'Cached: 14543728 kB' 'SwapCached: 0 kB' 'Active: 11717076 kB' 'Inactive: 3529048 kB' 'Active(anon): 11264252 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 702972 kB' 'Mapped: 216632 kB' 'Shmem: 10564560 kB' 'KReclaimable: 267904 kB' 'Slab: 912380 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 644476 kB' 'KernelStack: 22752 kB' 'PageTables: 9168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12693916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220216 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.001 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
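The /proc/meminfo snapshot captured at the top of this get_meminfo call is internally consistent on the hugepage side: it reports HugePages_Total: 1025 with Hugepagesize: 2048 kB, and the Hugetlb figure is simply the product of the two. A quick arithmetic check (plain shell, not part of the test scripts):

    # Sanity check on the snapshot above: 1025 pages of 2048 kB each
    # account for exactly the reported 'Hugetlb: 2099200 kB'.
    echo $(( 1025 * 2048 ))   # prints 2099200
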
00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.002 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
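The long run of continue entries above is get_meminfo stepping through every field of that snapshot until it reaches the requested key (HugePages_Total here), at which point it echoes the value and returns. A minimal standalone sketch of that scan pattern, with a made-up helper name and without the per-node "Node N" prefix stripping the real setup/common.sh performs:

    # Sketch only: mirrors the IFS=': ' / read / match / echo loop seen in the
    # trace; meminfo_value is a hypothetical name, not the SPDK helper itself.
    meminfo_value() {
        local key=$1 file=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do   # any 'kB' unit lands in the discarded field
            [[ $var == "$key" ]] || continue   # skip non-matching fields, as in the trace
            echo "$val"
            return 0
        done < "$file"
        return 1
    }

    meminfo_value HugePages_Total   # would print 1025 on the host traced above
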
00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 41453876 kB' 'MemUsed: 6614520 kB' 'SwapCached: 0 kB' 'Active: 3867612 kB' 'Inactive: 229648 kB' 'Active(anon): 3740580 kB' 'Inactive(anon): 0 kB' 'Active(file): 127032 kB' 'Inactive(file): 229648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 
3942592 kB' 'Mapped: 87864 kB' 'AnonPages: 157800 kB' 'Shmem: 3585912 kB' 'KernelStack: 11544 kB' 'PageTables: 3656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119664 kB' 'Slab: 420984 kB' 'SReclaimable: 119664 kB' 'SUnreclaim: 301320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.003 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.004 20:02:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.004 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218208 kB' 'MemFree: 30238940 kB' 'MemUsed: 13979268 kB' 'SwapCached: 0 kB' 'Active: 7849188 kB' 'Inactive: 3299400 kB' 'Active(anon): 7523396 kB' 'Inactive(anon): 0 kB' 'Active(file): 325792 kB' 'Inactive(file): 3299400 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10603844 kB' 'Mapped: 128768 kB' 'AnonPages: 544892 kB' 'Shmem: 6978652 kB' 'KernelStack: 11208 kB' 'PageTables: 5512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 148240 kB' 'Slab: 491396 kB' 'SReclaimable: 148240 kB' 'SUnreclaim: 343156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
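What the odd_alloc case is verifying at this point is that the odd page count really is split unevenly across the two NUMA nodes: the global snapshot reported HugePages_Total: 1025, the node 0 readout showed 512, and the node 1 snapshot above shows 513, so the per-node figures add back up to the global total. The same cross-check can be made directly against the per-node meminfo files the trace is reading (a sketch, not part of hugepages.sh):

    # Sum HugePages_Total across the per-node meminfo files; on the host
    # traced above this gives 512 + 513 = 1025, matching /proc/meminfo.
    awk '$3 == "HugePages_Total:" { sum += $4 } END { print sum }' \
        /sys/devices/system/node/node*/meminfo
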
00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.005 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:51.006 node0=512 expecting 513 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:51.006 node1=513 expecting 512 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:51.006 00:03:51.006 real 0m2.922s 00:03:51.006 user 0m1.187s 00:03:51.006 sys 0m1.766s 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:51.006 20:02:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:51.006 ************************************ 00:03:51.006 END TEST odd_alloc 00:03:51.006 ************************************ 00:03:51.006 20:02:16 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:51.006 20:02:16 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:51.006 20:02:16 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.006 20:02:16 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.007 20:02:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:51.007 ************************************ 00:03:51.007 START TEST custom_alloc 00:03:51.007 ************************************ 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:51.007 20:02:16 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:51.007 
20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.007 20:02:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:53.541 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:53.541 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:53.541 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:53.541 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:53.541 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:53.541 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:53.541 
0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:53.541 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:53.541 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:53.541 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:53.542 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:53.542 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:53.542 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:53.542 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:53.542 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:53.542 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:53.542 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70655436 kB' 'MemAvailable: 74117864 kB' 'Buffers: 2704 kB' 'Cached: 14543840 kB' 'SwapCached: 0 kB' 'Active: 11718224 kB' 'Inactive: 3529048 kB' 'Active(anon): 11265400 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 703916 kB' 'Mapped: 216676 kB' 'Shmem: 10564672 kB' 'KReclaimable: 267904 kB' 
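The trace above composes HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' and hands it to setup.sh, which is why nr_hugepages comes out as 1536 before verify_nr_hugepages starts. Outside the harness, the same per-node split can be requested directly through sysfs; a minimal sketch, assuming 2048 kB pages (the Hugepagesize reported in the meminfo snapshots below) on nodes 0 and 1 — the nodes_hp name is reused purely for illustration, this is not what setup.sh itself runs:

    nodes_hp=([0]=512 [1]=1024)
    for node in "${!nodes_hp[@]}"; do
        # Request the per-node page count; writing here needs root, hence sudo tee.
        echo "${nodes_hp[$node]}" | sudo tee \
            "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
    done
    grep HugePages_Total /proc/meminfo   # 512 + 1024 = 1536 expected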
'Slab: 913472 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 645568 kB' 'KernelStack: 22800 kB' 'PageTables: 9140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12697264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220488 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.542 20:02:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.542 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 
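Each counter the verification needs (AnonHugePages above, HugePages_Surp next, then HugePages_Rsvd) is fetched by walking /proc/meminfo, or the per-node meminfo file when a node is given, field by field; that walk is what produces the long runs of [[ key == ... ]] / continue lines in this trace. A condensed, self-contained sketch of that lookup, loosely mirroring setup/common.sh's get_meminfo (names and structure are illustrative, not the test code itself):

    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node counters live under sysfs; default to the system-wide file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every entry with "Node <n> "; strip it so the
        # field name comes first, exactly as in /proc/meminfo.
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"   # value only, the kB unit is discarded
                return 0
            fi
        done
        return 1
    }

    get_meminfo AnonHugePages     # 0 in this run
    get_meminfo HugePages_Total   # 1536 after the custom allocation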
00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70660276 kB' 'MemAvailable: 74122704 kB' 'Buffers: 2704 kB' 'Cached: 14543840 kB' 'SwapCached: 0 kB' 'Active: 11719040 kB' 'Inactive: 3529048 kB' 'Active(anon): 11266216 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 704628 kB' 'Mapped: 217164 kB' 'Shmem: 10564672 kB' 'KReclaimable: 267904 kB' 'Slab: 913484 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 645580 kB' 'KernelStack: 22816 kB' 'PageTables: 9184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12696760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220440 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.543 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.544 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 
20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.545 20:02:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 
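With AnonHugePages and HugePages_Surp both 0, the remaining lookup below (HugePages_Rsvd) completes the set of counters, and the check reduces to comparing them against the 1536 pages requested. Purely as an illustration of that end state — this is not the verify_nr_hugepages implementation — the same result can be confirmed directly:

    expected=1536
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    if (( total == expected && surp == 0 && rsvd == 0 )); then
        echo "hugepage allocation verified: $total pages"
    else
        echo "unexpected state: total=$total surp=$surp rsvd=$rsvd" >&2
    fi
    # Per-node view of the same 512/1024 split:
    for node in 0 1; do
        echo "node$node: $(cat /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages)"
    done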
setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70656952 kB' 'MemAvailable: 74119380 kB' 'Buffers: 2704 kB' 'Cached: 14543860 kB' 'SwapCached: 0 kB' 'Active: 11723532 kB' 'Inactive: 3529048 kB' 'Active(anon): 11270708 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 709704 kB' 'Mapped: 217164 kB' 'Shmem: 10564692 kB' 'KReclaimable: 267904 kB' 'Slab: 913476 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 645572 kB' 'KernelStack: 22992 kB' 'PageTables: 9444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12701808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220476 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.545 20:02:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.545 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 
20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.546 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:53.547 nr_hugepages=1536 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:53.547 resv_hugepages=0 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:53.547 surplus_hugepages=0 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:53.547 anon_hugepages=0 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70664140 kB' 'MemAvailable: 74126568 kB' 'Buffers: 2704 kB' 'Cached: 14543884 kB' 'SwapCached: 0 kB' 'Active: 11718152 kB' 'Inactive: 3529048 kB' 'Active(anon): 11265328 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 703780 kB' 'Mapped: 217056 kB' 'Shmem: 
10564716 kB' 'KReclaimable: 267904 kB' 'Slab: 913476 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 645572 kB' 'KernelStack: 22896 kB' 'PageTables: 9924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12695712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220440 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.547 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
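The long run of "continue" entries above and below is the xtrace of setup/common.sh's get_meminfo scanning every /proc/meminfo field until it reaches the one it was asked for (here HugePages_Total, after the same walk already returned HugePages_Rsvd = 0 and resv=0). As a minimal, self-contained sketch of that lookup — the function name get_mem_value and the sed-based "Node <n>" prefix strip are illustrative assumptions, not the script's actual code:

#!/usr/bin/env bash
# Sketch of the field lookup traced above (hypothetical helper, not
# the literal get_meminfo from test/setup/common.sh).
get_mem_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # A per-node query reads the node-specific meminfo when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node <n> "; drop that first,
    # then skip fields until the requested key and print its value.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

On this runner, get_mem_value HugePages_Total would print 1536 and get_mem_value HugePages_Surp 0 would read node0's file, matching the values echoed later in the trace.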
00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.548 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
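With resv, surplus and anon hugepages all reported as 0 and the global count confirmed (1536 == nr_hugepages + surp + resv), hugepages.sh now walks /sys/devices/system/node/node*/ to account for how the pool is split across the NUMA nodes; the per-node dumps that follow show 512 pages on node0 and 1024 on node1. A hedged sketch of that per-node accounting, using awk instead of the script's read loop purely for brevity (illustrative, not the test's literal code):

#!/usr/bin/env bash
# Illustrative only: mirrors the 512 + 1024 == 1536 arithmetic the test
# performs, not the literal code in test/setup/hugepages.sh.
declare -A node_pages=()
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Per-node lines look like "Node 0 HugePages_Total: 512".
    node_pages[$node]=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
done

sum=0
for node in "${!node_pages[@]}"; do
    printf 'node%s: %s hugepages\n' "$node" "${node_pages[$node]}"
    ((sum += node_pages[$node]))
done

((sum == total)) && echo "per-node hugepage counts add up to $total"

The trace below performs the same bookkeeping field by field, reading HugePages_Surp for node0 and then node1 from their node-specific meminfo files.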
00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 41458072 kB' 'MemUsed: 6610324 kB' 'SwapCached: 0 kB' 'Active: 3868116 kB' 'Inactive: 229648 kB' 'Active(anon): 3741084 kB' 'Inactive(anon): 0 kB' 'Active(file): 127032 kB' 'Inactive(file): 229648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3942656 kB' 'Mapped: 87892 kB' 'AnonPages: 158192 kB' 'Shmem: 3585976 kB' 'KernelStack: 11752 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119664 kB' 'Slab: 421596 kB' 'SReclaimable: 119664 kB' 'SUnreclaim: 301932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.549 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.810 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218208 kB' 'MemFree: 29204692 kB' 'MemUsed: 15013516 kB' 'SwapCached: 0 kB' 'Active: 7850652 kB' 'Inactive: 3299400 kB' 'Active(anon): 7524860 kB' 'Inactive(anon): 0 kB' 'Active(file): 325792 kB' 'Inactive(file): 3299400 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10603972 kB' 'Mapped: 128768 kB' 'AnonPages: 546256 kB' 'Shmem: 6978780 kB' 'KernelStack: 11240 kB' 'PageTables: 5576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 148240 kB' 'Slab: 491848 kB' 'SReclaimable: 148240 kB' 'SUnreclaim: 343608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.811 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.812 20:02:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.812 20:02:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:53.812 node0=512 expecting 512 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:53.812 node1=1024 expecting 1024 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:53.812 00:03:53.812 real 0m2.627s 00:03:53.812 user 0m0.993s 00:03:53.812 sys 0m1.630s 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.812 20:02:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:53.812 ************************************ 00:03:53.812 END TEST custom_alloc 00:03:53.812 ************************************ 00:03:53.812 20:02:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:53.812 20:02:18 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:53.812 20:02:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.812 20:02:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.812 20:02:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:53.812 ************************************ 00:03:53.812 START TEST no_shrink_alloc 00:03:53.812 ************************************ 00:03:53.812 20:02:18 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:53.812 20:02:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:53.812 20:02:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:53.812 20:02:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:53.812 20:02:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:53.812 20:02:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:53.812 20:02:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:53.812 20:02:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:53.812 20:02:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:53.812 20:02:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:53.812 20:02:19 
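The custom_alloc verification above boils down to checking that node 0 ended up with 512 hugepages and node 1 with 1024, then joining the counts and comparing against "512,1024". Below is a minimal stand-alone sketch of that kind of per-node check, reading the counts straight from sysfs; it is illustrative only and not the SPDK setup/hugepages.sh code itself.

    #!/usr/bin/env bash
    # Minimal sketch (not the SPDK script itself): compare per-node 2 MB
    # hugepage counts against an expected "node0,node1" list such as 512,1024.
    expected="512,1024"

    actual=()
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        count_file="$node_dir/hugepages/hugepages-2048kB/nr_hugepages"
        [[ -r $count_file ]] || continue
        actual+=("$(<"$count_file")")
    done

    # Join the per-node counts with commas, mirroring the 512,1024 comparison
    # seen in the trace above.
    joined=$(IFS=,; printf '%s' "${actual[*]}")

    if [[ $joined == "$expected" ]]; then
        echo "per-node allocation matches: $joined"
    else
        echo "mismatch: got '$joined', expected '$expected'" >&2
        exit 1
    fi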
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:53.812 20:02:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:53.812 20:02:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:53.812 20:02:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:53.812 20:02:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:53.812 20:02:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:53.812 20:02:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:53.812 20:02:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:53.812 20:02:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:53.812 20:02:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:53.812 20:02:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:53.812 20:02:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.812 20:02:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:56.349 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:56.349 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:56.349 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:56.349 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:56.349 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:56.349 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:56.349 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:56.349 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:56.349 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:56.349 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:56.349 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:56.349 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:56.349 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:56.349 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:56.349 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:56.349 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:56.349 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- 
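In the get_test_nr_hugepages trace above, a requested size of 2097152 on node 0 is turned into _nr_hugepages=1024 and stored in nodes_test[0]. The sketch below shows that size-to-page-count computation, assuming the size argument is in kB (2097152 kB = 2 GiB) and the default hugepage size is read from /proc/meminfo; the names are illustrative rather than the SPDK originals.

    #!/usr/bin/env bash
    # Sketch of the size -> hugepage-count computation traced above; assumes
    # a size argument in kB and a default hugepage size from /proc/meminfo.
    size_kb=2097152
    default_hugepage_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)

    # Refuse requests smaller than a single default-sized hugepage.
    if (( size_kb < default_hugepage_kb )); then
        echo "requested $size_kb kB is smaller than one hugepage" >&2
        exit 1
    fi

    nr_hugepages=$(( size_kb / default_hugepage_kb ))

    # Pin the whole allocation to the requested node, as the trace does for node 0.
    nodes_test=()
    nodes_test[0]=$nr_hugepages
    echo "node0=${nodes_test[0]} pages of ${default_hugepage_kb} kB"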
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71699528 kB' 'MemAvailable: 75161956 kB' 'Buffers: 2704 kB' 'Cached: 14544004 kB' 'SwapCached: 0 kB' 'Active: 11719220 kB' 'Inactive: 3529048 kB' 'Active(anon): 11266396 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 704592 kB' 'Mapped: 216676 kB' 'Shmem: 10564836 kB' 'KReclaimable: 267904 kB' 'Slab: 913236 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 645332 kB' 'KernelStack: 22816 kB' 'PageTables: 9864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12698280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220328 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.349 20:02:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.349 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.350 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71700432 kB' 'MemAvailable: 75162860 kB' 'Buffers: 2704 kB' 'Cached: 14544008 kB' 'SwapCached: 0 kB' 'Active: 11717740 kB' 'Inactive: 3529048 kB' 'Active(anon): 11264916 
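The long runs of IFS=': ' / read -r var val _ above are get_meminfo scanning /proc/meminfo (or a node's meminfo, once the "Node N " prefix is stripped) one field at a time until the requested key matches, then echoing its value; here that yields anon=0. The following is a condensed, self-contained sketch of that parsing pattern; the real setup/common.sh helper differs in details and the function name here is illustrative.

    #!/usr/bin/env bash
    # Condensed sketch of the meminfo-scanning pattern traced above.
    shopt -s extglob

    # Usage: meminfo_value <Key> [node]
    meminfo_value() {
        local key=$1 node=${2:-}
        local file=/proc/meminfo
        # Per-node counters live under sysfs and carry a "Node N " line prefix.
        [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo

        local -a mem
        mapfile -t mem < "$file"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix, if any

        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$key" ]]; then
                echo "$val"                # value in kB, or a bare page count
                return 0
            fi
        done
        return 1
    }

    meminfo_value AnonHugePages      # prints 0 on this host, per the trace
    meminfo_value HugePages_Free 0   # per-node variant, e.g. node 0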
kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 703108 kB' 'Mapped: 216668 kB' 'Shmem: 10564840 kB' 'KReclaimable: 267904 kB' 'Slab: 913276 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 645372 kB' 'KernelStack: 22800 kB' 'PageTables: 9152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12698300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220408 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.351 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.352 20:02:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.352 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.615 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71699704 kB' 'MemAvailable: 75162132 kB' 'Buffers: 2704 kB' 'Cached: 14544020 kB' 'SwapCached: 0 kB' 'Active: 11718184 kB' 'Inactive: 3529048 kB' 'Active(anon): 11265360 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 703724 kB' 'Mapped: 216668 kB' 'Shmem: 10564852 kB' 
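With anon=0 and surp=0 established, verify_nr_hugepages goes on to read HugePages_Rsvd and check the pool against the 1024 pages this test configured. The sketch below shows one plausible form of that consistency check, excluding surplus pages from the configured pool; it is an assumption for illustration, not the exact SPDK formula.

    #!/usr/bin/env bash
    # Sketch of the pool accounting behind the verify_nr_hugepages trace above;
    # one plausible check for illustration, not the exact SPDK formula.
    get() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

    total=$(get HugePages_Total)   # pages currently in the pool
    free=$(get HugePages_Free)     # pages not handed out yet
    surp=$(get HugePages_Surp)     # surplus pages beyond the configured pool
    resv=$(get HugePages_Rsvd)     # pages reserved but not yet faulted in
    expected=1024                  # what this test configured (nr_hugepages above)

    # Count only the persistently configured, non-surplus pages toward the
    # expected pool size.
    if (( total - surp == expected )); then
        echo "hugepage pool matches: $expected (free=$free rsvd=$resv)"
    else
        echo "pool is $((total - surp)) pages, expected $expected" >&2
        exit 1
    fi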
'KReclaimable: 267904 kB' 'Slab: 913312 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 645408 kB' 'KernelStack: 22896 kB' 'PageTables: 9320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12698320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220360 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
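The HugePages_Surp value read above, the HugePages_Rsvd scan in progress here, and the HugePages_Total and per-node reads that follow later in this trace feed the bookkeeping that ends in the "node0=1024 expecting 1024" line. The sketch below reuses the hypothetical get_meminfo_sketch helper from the previous snippet and assumes the 1024 preallocated 2048 kB pages that the log itself echoes ("nr_hugepages=1024"); it illustrates the shape of that consistency check and is not the hugepages.sh source.

#!/usr/bin/env bash
# Hedged illustration of the accounting this trace performs; the expected
# count mirrors what the log reports (nr_hugepages=1024, surplus/reserved 0).
nr_hugepages=1024

total=$(get_meminfo_sketch HugePages_Total)
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)

# Mirrors the "(( 1024 == nr_hugepages + surp + resv ))" assertion seen in
# the hugepages.sh trace: every preallocated page must still be accounted for.
if (( total != nr_hugepages + surp + resv )); then
    echo "unexpected hugepage total: $total" >&2
fi

# Per-node view, analogous to the node0 check that closes this test and
# prints "node0=1024 expecting 1024".
for node_dir in /sys/devices/system/node/node[0-9]*; do
    id=${node_dir##*node}
    echo "node$id=$(get_meminfo_sketch HugePages_Total "$id")"
done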
00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.616 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:56.617 nr_hugepages=1024 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.617 resv_hugepages=0 00:03:56.617 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.618 surplus_hugepages=0 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.618 anon_hugepages=0 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71697608 kB' 'MemAvailable: 75160036 kB' 'Buffers: 2704 kB' 'Cached: 14544048 kB' 'SwapCached: 0 kB' 'Active: 11718584 kB' 'Inactive: 3529048 kB' 'Active(anon): 11265760 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 704112 kB' 'Mapped: 216660 kB' 'Shmem: 10564880 kB' 'KReclaimable: 267904 kB' 'Slab: 913312 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 645408 kB' 'KernelStack: 22960 kB' 'PageTables: 9744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12698344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220440 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.618 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.619 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 40423024 kB' 'MemUsed: 7645372 kB' 'SwapCached: 0 kB' 'Active: 3870268 kB' 'Inactive: 229648 kB' 'Active(anon): 3743236 kB' 'Inactive(anon): 0 kB' 'Active(file): 127032 kB' 'Inactive(file): 229648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3942696 kB' 'Mapped: 87908 kB' 'AnonPages: 160448 kB' 'Shmem: 3586016 kB' 'KernelStack: 11944 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119664 kB' 'Slab: 421524 kB' 'SReclaimable: 119664 kB' 'SUnreclaim: 301860 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.620 
20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.620 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:56.621 node0=1024 expecting 1024 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:56.621 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.622 20:02:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.153 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:59.153 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:59.153 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:59.153 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:59.153 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:59.153 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:59.153 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:59.153 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:59.153 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:59.153 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:59.153 0000:80:04.6 (8086 2021): Already using the 
vfio-pci driver 00:03:59.153 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:59.153 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:59.153 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:59.153 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:59.153 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:59.153 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:59.417 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:59.417 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:59.417 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:59.417 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.417 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.417 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:59.417 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:59.417 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:59.417 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.417 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.417 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.417 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:59.417 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:59.417 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.417 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.417 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.417 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.417 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.417 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.417 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.417 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71692944 kB' 'MemAvailable: 75155372 kB' 'Buffers: 2704 kB' 'Cached: 14544132 kB' 'SwapCached: 0 kB' 'Active: 11719792 kB' 'Inactive: 3529048 kB' 'Active(anon): 11266968 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 704764 kB' 'Mapped: 216756 kB' 'Shmem: 10564964 kB' 'KReclaimable: 267904 kB' 'Slab: 913596 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 645692 kB' 'KernelStack: 23296 kB' 'PageTables: 10832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12698960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220456 
kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.418 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.419 20:02:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71690436 kB' 'MemAvailable: 75152864 kB' 'Buffers: 2704 kB' 'Cached: 14544144 kB' 'SwapCached: 0 kB' 'Active: 11719864 kB' 'Inactive: 3529048 kB' 'Active(anon): 11267040 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 704928 kB' 'Mapped: 216756 kB' 'Shmem: 10564976 kB' 'KReclaimable: 267904 kB' 'Slab: 913532 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 645628 kB' 'KernelStack: 23296 kB' 'PageTables: 10300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12698976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220408 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.419 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.420 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:59.421 
20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71689456 kB' 'MemAvailable: 75151884 kB' 'Buffers: 2704 kB' 'Cached: 14544144 kB' 'SwapCached: 0 kB' 'Active: 11719968 kB' 'Inactive: 3529048 kB' 'Active(anon): 11267144 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705052 kB' 'Mapped: 216756 kB' 'Shmem: 10564976 kB' 'KReclaimable: 267904 kB' 'Slab: 913532 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 645628 kB' 'KernelStack: 23264 kB' 'PageTables: 10828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12699000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220408 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.421 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.422 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
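Reader's note on the stretch of trace above and below: the repeated "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue" records are the xtrace of setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time until it hits the requested key. A minimal sketch of the same idea, with names and structure inferred from the trace rather than copied from the SPDK source:

get_meminfo_sketch() {
    # Hypothetical condensation of the get_meminfo loop traced above.
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node N "; strip that so the key lines up,
    # then split each "Key:   value kB" line on ': ' exactly as the trace does.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"     # print the value for the requested key and stop scanning
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1                # key not present in this meminfo file
}

# Example (values from the snapshot further below):
#   get_meminfo_sketch HugePages_Rsvd      -> 0
#   get_meminfo_sketch HugePages_Surp 0    -> 0   (node 0 view)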
00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.423 20:02:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:59.423 nr_hugepages=1024 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.423 resv_hugepages=0 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.423 surplus_hugepages=0 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.423 anon_hugepages=0 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:59.423 20:02:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71693188 kB' 'MemAvailable: 75155616 kB' 'Buffers: 2704 kB' 'Cached: 14544148 kB' 'SwapCached: 0 kB' 'Active: 11719964 kB' 'Inactive: 3529048 kB' 'Active(anon): 11267140 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3529048 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705024 kB' 'Mapped: 216756 kB' 'Shmem: 10564980 kB' 'KReclaimable: 267904 kB' 'Slab: 913532 kB' 'SReclaimable: 267904 kB' 'SUnreclaim: 645628 kB' 'KernelStack: 22976 kB' 'PageTables: 10460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12696156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220328 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3972052 kB' 'DirectMap2M: 30310400 kB' 'DirectMap1G: 67108864 kB' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
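Reader's note on the /proc/meminfo snapshot printed in the record above: the hugepage fields are internally consistent. HugePages_Total (1024) times Hugepagesize (2048 kB) gives 1024 * 2048 kB = 2097152 kB, which matches the reported Hugetlb figure, and HugePages_Free equals HugePages_Total because nothing is using the pool at this point in the test.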
00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.423 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.424 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.425 20:02:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.425 20:02:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 40434008 kB' 'MemUsed: 7634388 kB' 'SwapCached: 0 kB' 'Active: 3868284 kB' 'Inactive: 229648 kB' 'Active(anon): 3741252 kB' 'Inactive(anon): 0 kB' 'Active(file): 127032 kB' 'Inactive(file): 229648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3942712 kB' 'Mapped: 87896 kB' 'AnonPages: 158348 kB' 'Shmem: 3586032 kB' 'KernelStack: 11560 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 119664 kB' 'Slab: 421556 kB' 'SReclaimable: 119664 kB' 'SUnreclaim: 301892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.425 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
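The hugepages.sh records in this stretch assert that the kernel honored the requested reservation and then walk the per-node view; the node 0 lookup finishes just below with "node0=1024 expecting 1024". A compact restatement of that accounting, reusing the hypothetical get_meminfo_sketch helper from earlier (1024 is the count this test requested; the variable names are assumptions):

nr_hugepages=1024                                        # requested by the test
total=$(get_meminfo_sketch HugePages_Total)              # 1024 in the snapshot above
resv=$(get_meminfo_sketch HugePages_Rsvd)                # 0
surp=$(get_meminfo_sketch HugePages_Surp)                # 0
(( total == nr_hugepages + surp + resv )) || echo 'global hugepage accounting is off' >&2
node0_total=$(get_meminfo_sketch HugePages_Total 0)      # per-node view; 1024 on node 0 here
echo "node0=${node0_total} expecting 1024"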
00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.426 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.427 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.427 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:59.427 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.427 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.427 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.427 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.427 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:59.427 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.427 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.427 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.427 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.427 20:02:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:59.427 node0=1024 expecting 1024 00:03:59.427 20:02:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:59.427 00:03:59.427 real 0m5.762s 00:03:59.427 user 0m2.301s 00:03:59.427 sys 0m3.489s 00:03:59.427 20:02:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.427 20:02:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:59.427 ************************************ 00:03:59.427 END TEST no_shrink_alloc 00:03:59.427 ************************************ 00:03:59.686 20:02:24 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:59.686 20:02:24 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:59.686 20:02:24 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:59.686 20:02:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.686 20:02:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.686 20:02:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.686 20:02:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.686 20:02:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.686 20:02:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.686 20:02:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.686 20:02:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.686 20:02:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.686 20:02:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.686 20:02:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:59.686 20:02:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:59.686 00:03:59.686 real 0m21.707s 00:03:59.686 user 0m8.328s 00:03:59.686 sys 0m12.718s 00:03:59.686 20:02:24 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.686 20:02:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.686 ************************************ 00:03:59.686 END TEST hugepages 00:03:59.686 ************************************ 00:03:59.686 20:02:24 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:59.686 20:02:24 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:59.686 20:02:24 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.686 20:02:24 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.686 20:02:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:59.686 ************************************ 00:03:59.686 START TEST driver 00:03:59.686 ************************************ 00:03:59.686 20:02:24 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:59.686 * Looking for test storage... 
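The clear_hp loop traced above resets every per-node hugepage pool by writing 0 into the nr_hugepages sysfs files and then exports CLEAR_HUGE=yes for later stages. A minimal standalone sketch of that cleanup is shown below; it only mirrors the behaviour visible in the trace, and the node paths and page sizes are whatever the running kernel exposes rather than fixed values.

#!/usr/bin/env bash
# Sketch only: mirrors the clear_hp behaviour seen in setup/hugepages.sh,
# not a drop-in replacement for it. Needs root to write the sysfs files.
set -euo pipefail

for node in /sys/devices/system/node/node[0-9]*; do
    # Each node directory holds one subdirectory per supported hugepage size,
    # e.g. hugepages-2048kB and hugepages-1048576kB.
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # release the pool for this size on this node
    done
done

# Signal later setup.sh invocations that the pools were already cleared.
export CLEAR_HUGE=yes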
00:03:59.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:59.686 20:02:24 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:59.686 20:02:24 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.686 20:02:24 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:03.872 20:02:28 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:03.872 20:02:28 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.872 20:02:28 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.872 20:02:28 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:03.872 ************************************ 00:04:03.872 START TEST guess_driver 00:04:03.872 ************************************ 00:04:03.872 20:02:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:03.872 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:03.872 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:03.872 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:03.872 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:03.872 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:03.872 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:03.872 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:03.872 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:03.873 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:03.873 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 175 > 0 )) 00:04:03.873 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:03.873 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:03.873 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:03.873 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:03.873 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:03.873 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:03.873 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:03.873 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:03.873 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:03.873 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:03.873 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:03.873 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:03.873 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:03.873 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:03.873 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:03.873 20:02:28 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:03.873 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:03.873 Looking for driver=vfio-pci 00:04:03.873 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.873 20:02:28 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:03.873 20:02:28 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.873 20:02:28 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:06.403 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.403 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.403 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.403 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.403 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.403 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.403 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.403 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.403 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.403 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.403 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.403 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.403 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.403 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.403 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.403 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.403 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.403 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.403 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.404 20:02:31 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.404 20:02:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.442 20:02:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.442 20:02:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.442 20:02:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.442 20:02:32 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:07.442 20:02:32 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:07.442 20:02:32 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.442 20:02:32 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:11.630 00:04:11.630 real 0m7.891s 00:04:11.630 user 0m2.258s 00:04:11.630 sys 0m3.991s 00:04:11.630 20:02:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.630 20:02:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:11.630 ************************************ 00:04:11.630 END TEST guess_driver 00:04:11.630 ************************************ 00:04:11.630 20:02:36 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:11.630 00:04:11.630 real 0m11.893s 00:04:11.630 user 0m3.337s 00:04:11.630 sys 0m6.060s 00:04:11.630 20:02:36 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.630 20:02:36 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:11.630 ************************************ 00:04:11.630 END TEST driver 00:04:11.630 ************************************ 00:04:11.630 20:02:36 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:11.630 20:02:36 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:11.630 20:02:36 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.630 20:02:36 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.630 20:02:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:11.630 ************************************ 00:04:11.630 START TEST devices 00:04:11.630 ************************************ 00:04:11.630 20:02:36 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:11.630 * Looking for test storage... 00:04:11.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:11.630 20:02:36 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:11.630 20:02:36 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:11.630 20:02:36 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:11.630 20:02:36 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.921 20:02:39 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:14.921 20:02:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:14.921 20:02:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:14.921 20:02:40 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:14.921 20:02:40 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:14.921 20:02:40 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:14.921 20:02:40 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:14.921 20:02:40 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:14.921 20:02:40 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:14.921 20:02:40 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:14.921 20:02:40 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:14.921 20:02:40 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:14.921 20:02:40 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:14.921 20:02:40 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:14.921 20:02:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:14.921 20:02:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:14.921 20:02:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:14.921 20:02:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:86:00.0 00:04:14.921 20:02:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:04:14.921 20:02:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:14.921 20:02:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:14.921 
20:02:40 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:14.921 No valid GPT data, bailing 00:04:14.921 20:02:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:14.921 20:02:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:14.921 20:02:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:14.921 20:02:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:14.921 20:02:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:14.921 20:02:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:14.921 20:02:40 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:14.921 20:02:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:14.921 20:02:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:14.921 20:02:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:86:00.0 00:04:14.921 20:02:40 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:14.921 20:02:40 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:14.921 20:02:40 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:14.921 20:02:40 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.921 20:02:40 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.921 20:02:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:14.921 ************************************ 00:04:14.921 START TEST nvme_mount 00:04:14.921 ************************************ 00:04:14.921 20:02:40 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:14.921 20:02:40 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:14.921 20:02:40 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:14.921 20:02:40 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.921 20:02:40 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:14.921 20:02:40 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:14.921 20:02:40 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:14.921 20:02:40 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:14.921 20:02:40 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:14.921 20:02:40 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:14.921 20:02:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:14.921 20:02:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:14.921 20:02:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:14.921 20:02:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:14.921 20:02:40 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:14.921 20:02:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:14.921 20:02:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:14.921 20:02:40 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:14.921 20:02:40 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:14.921 20:02:40 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:15.858 Creating new GPT entries in memory. 00:04:15.858 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:15.858 other utilities. 00:04:15.858 20:02:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:15.858 20:02:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:15.858 20:02:41 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:15.858 20:02:41 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:15.858 20:02:41 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:17.237 Creating new GPT entries in memory. 00:04:17.237 The operation has completed successfully. 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 4017848 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:86:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:17.237 20:02:42 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.237 20:02:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.772 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:19.772 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:19.772 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:19.772 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.772 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:19.772 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.772 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:19.772 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.772 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:19.773 20:02:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.773 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.773 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:19.773 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.773 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:19.773 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:19.773 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:19.773 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.773 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.773 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:19.773 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:19.773 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:19.773 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:19.773 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:20.033 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:20.033 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:20.033 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:20.033 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:20.033 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:20.033 20:02:45 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:20.033 20:02:45 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.033 20:02:45 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:20.033 20:02:45 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:20.293 20:02:45 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.293 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:86:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.293 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:20.293 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:20.293 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.293 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.293 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.293 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.293 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:20.293 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.293 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.293 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:20.293 20:02:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.293 20:02:45 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.293 20:02:45 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:22.831 20:02:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.831 20:02:48 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:22.831 20:02:48 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:22.832 20:02:48 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.832 20:02:48 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:86:00.0 data@nvme0n1 '' '' 00:04:22.832 20:02:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:22.832 20:02:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:22.832 20:02:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:22.832 20:02:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:22.832 20:02:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:22.832 20:02:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:22.832 20:02:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:22.832 20:02:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.832 20:02:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:22.832 20:02:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:22.832 20:02:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.832 20:02:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 
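The nvme_mount flow traced above reduces to: wipe the disk, create a GPT partition with sgdisk, put ext4 on it, mount it under the test directory, create a dummy test file, then unmount and wipe the signatures again. A condensed sketch of that lifecycle follows; the mount point is an illustrative assumption, while the disk name and partition bounds are taken from the log.

#!/usr/bin/env bash
# Sketch of the partition -> format -> mount -> verify -> cleanup cycle
# exercised by test/setup/devices.sh; the mount point is a placeholder.
set -euo pipefail

disk=/dev/nvme0n1            # the NVMe disk under test (0000:86:00.0 in this run)
mnt=/tmp/nvme_mount_sketch   # assumption: any scratch mount point

sgdisk "$disk" --zap-all                 # drop existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:2099199      # one ~1 GiB partition, as in the log
part=${disk}p1

mkfs.ext4 -qF "$part"                    # quiet, forced ext4
mkdir -p "$mnt"
mount "$part" "$mnt"
touch "$mnt/test_nvme"                   # the dummy file the test later checks for

# Cleanup mirrors cleanup_nvme: unmount, then wipe both partition and whole disk.
umount "$mnt"
wipefs --all "$part"
wipefs --all "$disk"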
00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:25.369 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:25.369 00:04:25.369 real 0m10.294s 00:04:25.369 user 0m2.772s 00:04:25.369 sys 0m5.126s 00:04:25.369 20:02:50 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.370 20:02:50 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:25.370 ************************************ 00:04:25.370 END TEST nvme_mount 00:04:25.370 ************************************ 00:04:25.370 20:02:50 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:25.370 20:02:50 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:25.370 20:02:50 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.370 20:02:50 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.370 20:02:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:25.370 ************************************ 00:04:25.370 START TEST dm_mount 00:04:25.370 ************************************ 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:25.370 20:02:50 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:26.307 Creating new GPT entries in memory. 00:04:26.307 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:26.307 other utilities. 00:04:26.307 20:02:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:26.307 20:02:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.307 20:02:51 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:26.307 20:02:51 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:26.307 20:02:51 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:27.245 Creating new GPT entries in memory. 00:04:27.245 The operation has completed successfully. 00:04:27.245 20:02:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:27.245 20:02:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.245 20:02:52 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:27.245 20:02:52 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:27.245 20:02:52 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:28.623 The operation has completed successfully. 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 4022030 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:86:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.623 20:02:53 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:86:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:31.157 20:02:56 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:31.157 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:31.158 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:31.158 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:31.158 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:31.158 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.158 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:31.158 20:02:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:31.158 20:02:56 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.158 20:02:56 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:33.690 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:33.690 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:33.690 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:33.690 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.690 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:33.690 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.690 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:33.691 20:02:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.949 20:02:59 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.949 20:02:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:33.949 20:02:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:33.949 20:02:59 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:33.950 20:02:59 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:33.950 20:02:59 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:33.950 20:02:59 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:33.950 20:02:59 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.950 20:02:59 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:33.950 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:33.950 20:02:59 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:33.950 20:02:59 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:33.950 00:04:33.950 real 0m8.709s 00:04:33.950 user 0m2.035s 00:04:33.950 sys 0m3.673s 00:04:33.950 20:02:59 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.950 20:02:59 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:33.950 ************************************ 00:04:33.950 END TEST dm_mount 00:04:33.950 ************************************ 00:04:33.950 20:02:59 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:04:33.950 20:02:59 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:33.950 20:02:59 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:33.950 20:02:59 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.950 20:02:59 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.950 20:02:59 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:33.950 20:02:59 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:33.950 20:02:59 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:34.209 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:34.209 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:34.209 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:34.209 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:34.209 20:02:59 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:34.209 20:02:59 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:34.209 20:02:59 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:34.209 20:02:59 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:34.209 20:02:59 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:34.209 20:02:59 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:34.209 20:02:59 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:34.209 00:04:34.209 real 0m22.672s 00:04:34.209 user 0m6.010s 00:04:34.209 sys 0m11.065s 00:04:34.209 20:02:59 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.209 20:02:59 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:34.209 ************************************ 00:04:34.209 END TEST devices 00:04:34.209 ************************************ 00:04:34.209 20:02:59 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:34.209 00:04:34.209 real 1m15.556s 00:04:34.209 user 0m23.806s 00:04:34.209 sys 0m41.150s 00:04:34.209 20:02:59 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.209 20:02:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:34.209 ************************************ 00:04:34.209 END TEST setup.sh 00:04:34.209 ************************************ 00:04:34.468 20:02:59 -- common/autotest_common.sh@1142 -- # return 0 00:04:34.468 20:02:59 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:37.005 Hugepages 00:04:37.005 node hugesize free / total 00:04:37.005 node0 1048576kB 0 / 0 00:04:37.005 node0 2048kB 2048 / 2048 00:04:37.005 node1 1048576kB 0 / 0 00:04:37.005 node1 2048kB 0 / 0 00:04:37.005 00:04:37.005 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:37.005 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:37.005 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:37.005 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:37.005 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:37.005 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:37.005 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:37.005 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:37.005 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:37.005 I/OAT 
0000:80:04.0 8086 2021 1 ioatdma - - 00:04:37.005 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:37.005 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:37.005 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:37.005 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:37.005 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:37.005 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:37.005 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:37.005 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:37.005 20:03:02 -- spdk/autotest.sh@130 -- # uname -s 00:04:37.005 20:03:02 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:37.005 20:03:02 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:37.005 20:03:02 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:40.290 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:40.290 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:40.290 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:40.290 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:40.290 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:40.290 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:40.290 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:40.290 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:40.290 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:40.290 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:40.290 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:40.290 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:40.290 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:40.290 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:40.290 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:40.290 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:40.858 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:04:41.116 20:03:06 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:42.052 20:03:07 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:42.052 20:03:07 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:42.052 20:03:07 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:42.052 20:03:07 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:42.052 20:03:07 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:42.052 20:03:07 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:42.052 20:03:07 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:42.052 20:03:07 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:42.052 20:03:07 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:42.052 20:03:07 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:42.052 20:03:07 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:86:00.0 00:04:42.052 20:03:07 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:44.586 Waiting for block devices as requested 00:04:44.845 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:04:44.845 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:44.845 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:45.103 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:45.103 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:45.103 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:45.361 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:45.361 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:45.361 0000:00:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:04:45.361 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:45.618 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:45.618 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:45.618 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:45.877 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:45.877 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:45.877 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:46.136 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:46.136 20:03:11 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:46.136 20:03:11 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:86:00.0 00:04:46.136 20:03:11 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:46.136 20:03:11 -- common/autotest_common.sh@1502 -- # grep 0000:86:00.0/nvme/nvme 00:04:46.136 20:03:11 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:04:46.136 20:03:11 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 ]] 00:04:46.136 20:03:11 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:04:46.136 20:03:11 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:46.136 20:03:11 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:46.136 20:03:11 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:46.136 20:03:11 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:46.136 20:03:11 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:46.136 20:03:11 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:46.136 20:03:11 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:46.136 20:03:11 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:46.136 20:03:11 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:46.136 20:03:11 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:46.136 20:03:11 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:46.136 20:03:11 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:46.136 20:03:11 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:46.136 20:03:11 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:46.136 20:03:11 -- common/autotest_common.sh@1557 -- # continue 00:04:46.136 20:03:11 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:46.136 20:03:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:46.136 20:03:11 -- common/autotest_common.sh@10 -- # set +x 00:04:46.136 20:03:11 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:46.136 20:03:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:46.136 20:03:11 -- common/autotest_common.sh@10 -- # set +x 00:04:46.136 20:03:11 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:49.428 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:49.428 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:49.428 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:49.428 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:49.428 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:49.428 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:49.428 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:49.428 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:49.428 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:49.428 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
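The nvme_namespace_revert pass traced above keys off two fields parsed from nvme id-ctrl: OACS bit 3 (0x8) indicates namespace-management support, and an unvmcap of 0 means there is no unallocated capacity to reclaim, so the controller is skipped. A minimal standalone version of that check is sketched below; it is illustrative rather than the harness code, and it assumes nvme-cli is installed and that /dev/nvme0 is the controller of interest.

    #!/usr/bin/env bash
    # Re-creation of the OACS / unvmcap check seen in the trace above (illustrative).
    ctrlr=/dev/nvme0

    # Optional Admin Command Support: bit 3 (0x8) = namespace management/attachment.
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    oacs_ns_manage=$((oacs & 0x8))

    if [[ $oacs_ns_manage -ne 0 ]]; then
        # Unallocated NVM capacity; 0 means there is nothing to revert or reclaim.
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        if [[ $unvmcap -eq 0 ]]; then
            echo "$ctrlr: no unallocated capacity, skipping namespace revert"
        else
            echo "$ctrlr: $unvmcap bytes unallocated, namespaces would be reverted"
        fi
    fi
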
00:04:49.428 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:49.428 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:49.428 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:49.428 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:49.428 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:49.428 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:49.995 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:04:49.995 20:03:15 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:49.995 20:03:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.995 20:03:15 -- common/autotest_common.sh@10 -- # set +x 00:04:49.995 20:03:15 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:49.995 20:03:15 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:49.995 20:03:15 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:49.995 20:03:15 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:49.995 20:03:15 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:49.995 20:03:15 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:49.995 20:03:15 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:49.995 20:03:15 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:49.995 20:03:15 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:49.995 20:03:15 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:49.995 20:03:15 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:49.995 20:03:15 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:49.995 20:03:15 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:86:00.0 00:04:49.995 20:03:15 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:49.995 20:03:15 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:86:00.0/device 00:04:49.995 20:03:15 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:49.995 20:03:15 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:49.995 20:03:15 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:49.995 20:03:15 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:86:00.0 00:04:50.253 20:03:15 -- common/autotest_common.sh@1592 -- # [[ -z 0000:86:00.0 ]] 00:04:50.253 20:03:15 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.253 20:03:15 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=4031842 00:04:50.253 20:03:15 -- common/autotest_common.sh@1598 -- # waitforlisten 4031842 00:04:50.253 20:03:15 -- common/autotest_common.sh@829 -- # '[' -z 4031842 ']' 00:04:50.253 20:03:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.253 20:03:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.253 20:03:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.253 20:03:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.253 20:03:15 -- common/autotest_common.sh@10 -- # set +x 00:04:50.253 [2024-07-15 20:03:15.396753] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:04:50.253 [2024-07-15 20:03:15.396813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031842 ] 00:04:50.253 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.253 [2024-07-15 20:03:15.478044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.253 [2024-07-15 20:03:15.570199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.511 20:03:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.511 20:03:15 -- common/autotest_common.sh@862 -- # return 0 00:04:50.511 20:03:15 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:50.511 20:03:15 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:50.511 20:03:15 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:86:00.0 00:04:53.795 nvme0n1 00:04:53.795 20:03:18 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:53.795 [2024-07-15 20:03:19.001848] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:53.795 request: 00:04:53.795 { 00:04:53.795 "nvme_ctrlr_name": "nvme0", 00:04:53.795 "password": "test", 00:04:53.795 "method": "bdev_nvme_opal_revert", 00:04:53.795 "req_id": 1 00:04:53.795 } 00:04:53.795 Got JSON-RPC error response 00:04:53.795 response: 00:04:53.795 { 00:04:53.795 "code": -32602, 00:04:53.795 "message": "Invalid parameters" 00:04:53.795 } 00:04:53.795 20:03:19 -- common/autotest_common.sh@1604 -- # true 00:04:53.795 20:03:19 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:53.795 20:03:19 -- common/autotest_common.sh@1608 -- # killprocess 4031842 00:04:53.795 20:03:19 -- common/autotest_common.sh@948 -- # '[' -z 4031842 ']' 00:04:53.795 20:03:19 -- common/autotest_common.sh@952 -- # kill -0 4031842 00:04:53.795 20:03:19 -- common/autotest_common.sh@953 -- # uname 00:04:53.795 20:03:19 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:53.795 20:03:19 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4031842 00:04:53.795 20:03:19 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:53.795 20:03:19 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:53.795 20:03:19 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4031842' 00:04:53.795 killing process with pid 4031842 00:04:53.795 20:03:19 -- common/autotest_common.sh@967 -- # kill 4031842 00:04:53.795 20:03:19 -- common/autotest_common.sh@972 -- # wait 4031842 00:04:55.795 20:03:20 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:55.795 20:03:20 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:55.795 20:03:20 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:55.795 20:03:20 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:55.795 20:03:20 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:55.795 20:03:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:55.795 20:03:20 -- common/autotest_common.sh@10 -- # set +x 00:04:55.795 20:03:20 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:55.795 20:03:20 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:55.795 20:03:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:04:55.795 20:03:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.796 20:03:20 -- common/autotest_common.sh@10 -- # set +x 00:04:55.796 ************************************ 00:04:55.796 START TEST env 00:04:55.796 ************************************ 00:04:55.796 20:03:20 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:55.796 * Looking for test storage... 00:04:55.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:55.796 20:03:20 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:55.796 20:03:20 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.796 20:03:20 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.796 20:03:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.796 ************************************ 00:04:55.796 START TEST env_memory 00:04:55.796 ************************************ 00:04:55.796 20:03:20 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:55.796 00:04:55.796 00:04:55.796 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.796 http://cunit.sourceforge.net/ 00:04:55.796 00:04:55.796 00:04:55.796 Suite: memory 00:04:55.796 Test: alloc and free memory map ...[2024-07-15 20:03:20.961909] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:55.796 passed 00:04:55.796 Test: mem map translation ...[2024-07-15 20:03:20.991046] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:55.796 [2024-07-15 20:03:20.991067] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:55.796 [2024-07-15 20:03:20.991123] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:55.796 [2024-07-15 20:03:20.991136] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:55.796 passed 00:04:55.796 Test: mem map registration ...[2024-07-15 20:03:21.050989] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:55.796 [2024-07-15 20:03:21.051007] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:55.796 passed 00:04:55.796 Test: mem map adjacent registrations ...passed 00:04:55.796 00:04:55.796 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.796 suites 1 1 n/a 0 0 00:04:55.796 tests 4 4 4 0 0 00:04:55.796 asserts 152 152 152 0 n/a 00:04:55.796 00:04:55.796 Elapsed time = 0.203 seconds 00:04:55.796 00:04:55.796 real 0m0.216s 00:04:55.796 user 0m0.205s 00:04:55.796 sys 0m0.011s 00:04:55.796 20:03:21 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.796 20:03:21 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:04:55.796 ************************************ 00:04:55.796 END TEST env_memory 00:04:55.796 ************************************ 00:04:56.056 20:03:21 env -- common/autotest_common.sh@1142 -- # return 0 00:04:56.056 20:03:21 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:56.056 20:03:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.056 20:03:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.056 20:03:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.056 ************************************ 00:04:56.056 START TEST env_vtophys 00:04:56.056 ************************************ 00:04:56.056 20:03:21 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:56.056 EAL: lib.eal log level changed from notice to debug 00:04:56.056 EAL: Detected lcore 0 as core 0 on socket 0 00:04:56.056 EAL: Detected lcore 1 as core 1 on socket 0 00:04:56.056 EAL: Detected lcore 2 as core 2 on socket 0 00:04:56.056 EAL: Detected lcore 3 as core 3 on socket 0 00:04:56.056 EAL: Detected lcore 4 as core 4 on socket 0 00:04:56.056 EAL: Detected lcore 5 as core 5 on socket 0 00:04:56.056 EAL: Detected lcore 6 as core 6 on socket 0 00:04:56.056 EAL: Detected lcore 7 as core 8 on socket 0 00:04:56.056 EAL: Detected lcore 8 as core 9 on socket 0 00:04:56.056 EAL: Detected lcore 9 as core 10 on socket 0 00:04:56.056 EAL: Detected lcore 10 as core 11 on socket 0 00:04:56.056 EAL: Detected lcore 11 as core 12 on socket 0 00:04:56.056 EAL: Detected lcore 12 as core 13 on socket 0 00:04:56.056 EAL: Detected lcore 13 as core 14 on socket 0 00:04:56.056 EAL: Detected lcore 14 as core 16 on socket 0 00:04:56.056 EAL: Detected lcore 15 as core 17 on socket 0 00:04:56.056 EAL: Detected lcore 16 as core 18 on socket 0 00:04:56.056 EAL: Detected lcore 17 as core 19 on socket 0 00:04:56.056 EAL: Detected lcore 18 as core 20 on socket 0 00:04:56.056 EAL: Detected lcore 19 as core 21 on socket 0 00:04:56.056 EAL: Detected lcore 20 as core 22 on socket 0 00:04:56.056 EAL: Detected lcore 21 as core 24 on socket 0 00:04:56.056 EAL: Detected lcore 22 as core 25 on socket 0 00:04:56.056 EAL: Detected lcore 23 as core 26 on socket 0 00:04:56.056 EAL: Detected lcore 24 as core 27 on socket 0 00:04:56.056 EAL: Detected lcore 25 as core 28 on socket 0 00:04:56.056 EAL: Detected lcore 26 as core 29 on socket 0 00:04:56.056 EAL: Detected lcore 27 as core 30 on socket 0 00:04:56.056 EAL: Detected lcore 28 as core 0 on socket 1 00:04:56.056 EAL: Detected lcore 29 as core 1 on socket 1 00:04:56.056 EAL: Detected lcore 30 as core 2 on socket 1 00:04:56.056 EAL: Detected lcore 31 as core 3 on socket 1 00:04:56.056 EAL: Detected lcore 32 as core 4 on socket 1 00:04:56.056 EAL: Detected lcore 33 as core 5 on socket 1 00:04:56.056 EAL: Detected lcore 34 as core 6 on socket 1 00:04:56.056 EAL: Detected lcore 35 as core 8 on socket 1 00:04:56.056 EAL: Detected lcore 36 as core 9 on socket 1 00:04:56.056 EAL: Detected lcore 37 as core 10 on socket 1 00:04:56.056 EAL: Detected lcore 38 as core 11 on socket 1 00:04:56.056 EAL: Detected lcore 39 as core 12 on socket 1 00:04:56.056 EAL: Detected lcore 40 as core 13 on socket 1 00:04:56.056 EAL: Detected lcore 41 as core 14 on socket 1 00:04:56.056 EAL: Detected lcore 42 as core 16 on socket 1 00:04:56.056 EAL: Detected lcore 43 as core 17 on socket 1 00:04:56.056 EAL: Detected lcore 44 as core 
18 on socket 1 00:04:56.056 EAL: Detected lcore 45 as core 19 on socket 1 00:04:56.056 EAL: Detected lcore 46 as core 20 on socket 1 00:04:56.056 EAL: Detected lcore 47 as core 21 on socket 1 00:04:56.056 EAL: Detected lcore 48 as core 22 on socket 1 00:04:56.056 EAL: Detected lcore 49 as core 24 on socket 1 00:04:56.056 EAL: Detected lcore 50 as core 25 on socket 1 00:04:56.056 EAL: Detected lcore 51 as core 26 on socket 1 00:04:56.056 EAL: Detected lcore 52 as core 27 on socket 1 00:04:56.056 EAL: Detected lcore 53 as core 28 on socket 1 00:04:56.056 EAL: Detected lcore 54 as core 29 on socket 1 00:04:56.056 EAL: Detected lcore 55 as core 30 on socket 1 00:04:56.056 EAL: Detected lcore 56 as core 0 on socket 0 00:04:56.056 EAL: Detected lcore 57 as core 1 on socket 0 00:04:56.056 EAL: Detected lcore 58 as core 2 on socket 0 00:04:56.056 EAL: Detected lcore 59 as core 3 on socket 0 00:04:56.056 EAL: Detected lcore 60 as core 4 on socket 0 00:04:56.056 EAL: Detected lcore 61 as core 5 on socket 0 00:04:56.056 EAL: Detected lcore 62 as core 6 on socket 0 00:04:56.056 EAL: Detected lcore 63 as core 8 on socket 0 00:04:56.056 EAL: Detected lcore 64 as core 9 on socket 0 00:04:56.056 EAL: Detected lcore 65 as core 10 on socket 0 00:04:56.056 EAL: Detected lcore 66 as core 11 on socket 0 00:04:56.056 EAL: Detected lcore 67 as core 12 on socket 0 00:04:56.056 EAL: Detected lcore 68 as core 13 on socket 0 00:04:56.056 EAL: Detected lcore 69 as core 14 on socket 0 00:04:56.056 EAL: Detected lcore 70 as core 16 on socket 0 00:04:56.056 EAL: Detected lcore 71 as core 17 on socket 0 00:04:56.056 EAL: Detected lcore 72 as core 18 on socket 0 00:04:56.056 EAL: Detected lcore 73 as core 19 on socket 0 00:04:56.056 EAL: Detected lcore 74 as core 20 on socket 0 00:04:56.056 EAL: Detected lcore 75 as core 21 on socket 0 00:04:56.056 EAL: Detected lcore 76 as core 22 on socket 0 00:04:56.056 EAL: Detected lcore 77 as core 24 on socket 0 00:04:56.056 EAL: Detected lcore 78 as core 25 on socket 0 00:04:56.056 EAL: Detected lcore 79 as core 26 on socket 0 00:04:56.056 EAL: Detected lcore 80 as core 27 on socket 0 00:04:56.056 EAL: Detected lcore 81 as core 28 on socket 0 00:04:56.056 EAL: Detected lcore 82 as core 29 on socket 0 00:04:56.057 EAL: Detected lcore 83 as core 30 on socket 0 00:04:56.057 EAL: Detected lcore 84 as core 0 on socket 1 00:04:56.057 EAL: Detected lcore 85 as core 1 on socket 1 00:04:56.057 EAL: Detected lcore 86 as core 2 on socket 1 00:04:56.057 EAL: Detected lcore 87 as core 3 on socket 1 00:04:56.057 EAL: Detected lcore 88 as core 4 on socket 1 00:04:56.057 EAL: Detected lcore 89 as core 5 on socket 1 00:04:56.057 EAL: Detected lcore 90 as core 6 on socket 1 00:04:56.057 EAL: Detected lcore 91 as core 8 on socket 1 00:04:56.057 EAL: Detected lcore 92 as core 9 on socket 1 00:04:56.057 EAL: Detected lcore 93 as core 10 on socket 1 00:04:56.057 EAL: Detected lcore 94 as core 11 on socket 1 00:04:56.057 EAL: Detected lcore 95 as core 12 on socket 1 00:04:56.057 EAL: Detected lcore 96 as core 13 on socket 1 00:04:56.057 EAL: Detected lcore 97 as core 14 on socket 1 00:04:56.057 EAL: Detected lcore 98 as core 16 on socket 1 00:04:56.057 EAL: Detected lcore 99 as core 17 on socket 1 00:04:56.057 EAL: Detected lcore 100 as core 18 on socket 1 00:04:56.057 EAL: Detected lcore 101 as core 19 on socket 1 00:04:56.057 EAL: Detected lcore 102 as core 20 on socket 1 00:04:56.057 EAL: Detected lcore 103 as core 21 on socket 1 00:04:56.057 EAL: Detected lcore 104 as core 22 on socket 1 00:04:56.057 
EAL: Detected lcore 105 as core 24 on socket 1 00:04:56.057 EAL: Detected lcore 106 as core 25 on socket 1 00:04:56.057 EAL: Detected lcore 107 as core 26 on socket 1 00:04:56.057 EAL: Detected lcore 108 as core 27 on socket 1 00:04:56.057 EAL: Detected lcore 109 as core 28 on socket 1 00:04:56.057 EAL: Detected lcore 110 as core 29 on socket 1 00:04:56.057 EAL: Detected lcore 111 as core 30 on socket 1 00:04:56.057 EAL: Maximum logical cores by configuration: 128 00:04:56.057 EAL: Detected CPU lcores: 112 00:04:56.057 EAL: Detected NUMA nodes: 2 00:04:56.057 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:56.057 EAL: Detected shared linkage of DPDK 00:04:56.057 EAL: No shared files mode enabled, IPC will be disabled 00:04:56.057 EAL: Bus pci wants IOVA as 'DC' 00:04:56.057 EAL: Buses did not request a specific IOVA mode. 00:04:56.057 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:56.057 EAL: Selected IOVA mode 'VA' 00:04:56.057 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.057 EAL: Probing VFIO support... 00:04:56.057 EAL: IOMMU type 1 (Type 1) is supported 00:04:56.057 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:56.057 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:56.057 EAL: VFIO support initialized 00:04:56.057 EAL: Ask a virtual area of 0x2e000 bytes 00:04:56.057 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:56.057 EAL: Setting up physically contiguous memory... 00:04:56.057 EAL: Setting maximum number of open files to 524288 00:04:56.057 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:56.057 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:56.057 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:56.057 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.057 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:56.057 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.057 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.057 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:56.057 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:56.057 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.057 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:56.057 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.057 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.057 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:56.057 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:56.057 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.057 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:56.057 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.057 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.057 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:56.057 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:56.057 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.057 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:56.057 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.057 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.057 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:56.057 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:56.057 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:56.057 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.057 EAL: 
Virtual area found at 0x201000800000 (size = 0x61000) 00:04:56.057 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:56.057 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.057 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:56.057 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:56.057 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.057 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:56.057 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:56.057 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.057 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:56.057 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:56.057 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.057 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:56.057 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:56.057 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.057 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:56.057 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:56.057 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.057 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:56.057 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:56.057 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.057 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:56.057 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:56.057 EAL: Hugepages will be freed exactly as allocated. 00:04:56.057 EAL: No shared files mode enabled, IPC is disabled 00:04:56.057 EAL: No shared files mode enabled, IPC is disabled 00:04:56.057 EAL: TSC frequency is ~2200000 KHz 00:04:56.057 EAL: Main lcore 0 is ready (tid=7f9d45cd3a00;cpuset=[0]) 00:04:56.057 EAL: Trying to obtain current memory policy. 00:04:56.057 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.057 EAL: Restoring previous memory policy: 0 00:04:56.057 EAL: request: mp_malloc_sync 00:04:56.057 EAL: No shared files mode enabled, IPC is disabled 00:04:56.057 EAL: Heap on socket 0 was expanded by 2MB 00:04:56.057 EAL: No shared files mode enabled, IPC is disabled 00:04:56.057 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:56.057 EAL: Mem event callback 'spdk:(nil)' registered 00:04:56.057 00:04:56.057 00:04:56.057 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.057 http://cunit.sourceforge.net/ 00:04:56.057 00:04:56.057 00:04:56.057 Suite: components_suite 00:04:56.057 Test: vtophys_malloc_test ...passed 00:04:56.057 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:56.057 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.057 EAL: Restoring previous memory policy: 4 00:04:56.057 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.057 EAL: request: mp_malloc_sync 00:04:56.057 EAL: No shared files mode enabled, IPC is disabled 00:04:56.057 EAL: Heap on socket 0 was expanded by 4MB 00:04:56.057 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.057 EAL: request: mp_malloc_sync 00:04:56.058 EAL: No shared files mode enabled, IPC is disabled 00:04:56.058 EAL: Heap on socket 0 was shrunk by 4MB 00:04:56.058 EAL: Trying to obtain current memory policy. 
00:04:56.058 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.058 EAL: Restoring previous memory policy: 4 00:04:56.058 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.058 EAL: request: mp_malloc_sync 00:04:56.058 EAL: No shared files mode enabled, IPC is disabled 00:04:56.058 EAL: Heap on socket 0 was expanded by 6MB 00:04:56.058 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.058 EAL: request: mp_malloc_sync 00:04:56.058 EAL: No shared files mode enabled, IPC is disabled 00:04:56.058 EAL: Heap on socket 0 was shrunk by 6MB 00:04:56.058 EAL: Trying to obtain current memory policy. 00:04:56.058 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.058 EAL: Restoring previous memory policy: 4 00:04:56.058 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.058 EAL: request: mp_malloc_sync 00:04:56.058 EAL: No shared files mode enabled, IPC is disabled 00:04:56.058 EAL: Heap on socket 0 was expanded by 10MB 00:04:56.058 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.058 EAL: request: mp_malloc_sync 00:04:56.058 EAL: No shared files mode enabled, IPC is disabled 00:04:56.058 EAL: Heap on socket 0 was shrunk by 10MB 00:04:56.058 EAL: Trying to obtain current memory policy. 00:04:56.058 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.058 EAL: Restoring previous memory policy: 4 00:04:56.058 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.058 EAL: request: mp_malloc_sync 00:04:56.058 EAL: No shared files mode enabled, IPC is disabled 00:04:56.058 EAL: Heap on socket 0 was expanded by 18MB 00:04:56.058 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.058 EAL: request: mp_malloc_sync 00:04:56.058 EAL: No shared files mode enabled, IPC is disabled 00:04:56.058 EAL: Heap on socket 0 was shrunk by 18MB 00:04:56.058 EAL: Trying to obtain current memory policy. 00:04:56.058 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.058 EAL: Restoring previous memory policy: 4 00:04:56.058 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.058 EAL: request: mp_malloc_sync 00:04:56.058 EAL: No shared files mode enabled, IPC is disabled 00:04:56.058 EAL: Heap on socket 0 was expanded by 34MB 00:04:56.058 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.058 EAL: request: mp_malloc_sync 00:04:56.058 EAL: No shared files mode enabled, IPC is disabled 00:04:56.058 EAL: Heap on socket 0 was shrunk by 34MB 00:04:56.058 EAL: Trying to obtain current memory policy. 00:04:56.058 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.058 EAL: Restoring previous memory policy: 4 00:04:56.058 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.058 EAL: request: mp_malloc_sync 00:04:56.058 EAL: No shared files mode enabled, IPC is disabled 00:04:56.058 EAL: Heap on socket 0 was expanded by 66MB 00:04:56.058 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.058 EAL: request: mp_malloc_sync 00:04:56.058 EAL: No shared files mode enabled, IPC is disabled 00:04:56.058 EAL: Heap on socket 0 was shrunk by 66MB 00:04:56.058 EAL: Trying to obtain current memory policy. 
00:04:56.058 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.058 EAL: Restoring previous memory policy: 4 00:04:56.058 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.058 EAL: request: mp_malloc_sync 00:04:56.058 EAL: No shared files mode enabled, IPC is disabled 00:04:56.058 EAL: Heap on socket 0 was expanded by 130MB 00:04:56.058 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.317 EAL: request: mp_malloc_sync 00:04:56.317 EAL: No shared files mode enabled, IPC is disabled 00:04:56.317 EAL: Heap on socket 0 was shrunk by 130MB 00:04:56.317 EAL: Trying to obtain current memory policy. 00:04:56.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.317 EAL: Restoring previous memory policy: 4 00:04:56.317 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.317 EAL: request: mp_malloc_sync 00:04:56.317 EAL: No shared files mode enabled, IPC is disabled 00:04:56.317 EAL: Heap on socket 0 was expanded by 258MB 00:04:56.317 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.317 EAL: request: mp_malloc_sync 00:04:56.317 EAL: No shared files mode enabled, IPC is disabled 00:04:56.317 EAL: Heap on socket 0 was shrunk by 258MB 00:04:56.317 EAL: Trying to obtain current memory policy. 00:04:56.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.317 EAL: Restoring previous memory policy: 4 00:04:56.317 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.317 EAL: request: mp_malloc_sync 00:04:56.317 EAL: No shared files mode enabled, IPC is disabled 00:04:56.317 EAL: Heap on socket 0 was expanded by 514MB 00:04:56.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.576 EAL: request: mp_malloc_sync 00:04:56.576 EAL: No shared files mode enabled, IPC is disabled 00:04:56.576 EAL: Heap on socket 0 was shrunk by 514MB 00:04:56.576 EAL: Trying to obtain current memory policy. 
00:04:56.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.835 EAL: Restoring previous memory policy: 4 00:04:56.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.835 EAL: request: mp_malloc_sync 00:04:56.835 EAL: No shared files mode enabled, IPC is disabled 00:04:56.835 EAL: Heap on socket 0 was expanded by 1026MB 00:04:56.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.094 EAL: request: mp_malloc_sync 00:04:57.094 EAL: No shared files mode enabled, IPC is disabled 00:04:57.094 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:57.094 passed 00:04:57.094 00:04:57.094 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.094 suites 1 1 n/a 0 0 00:04:57.094 tests 2 2 2 0 0 00:04:57.094 asserts 497 497 497 0 n/a 00:04:57.094 00:04:57.094 Elapsed time = 1.010 seconds 00:04:57.094 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.094 EAL: request: mp_malloc_sync 00:04:57.094 EAL: No shared files mode enabled, IPC is disabled 00:04:57.094 EAL: Heap on socket 0 was shrunk by 2MB 00:04:57.094 EAL: No shared files mode enabled, IPC is disabled 00:04:57.094 EAL: No shared files mode enabled, IPC is disabled 00:04:57.094 EAL: No shared files mode enabled, IPC is disabled 00:04:57.094 00:04:57.094 real 0m1.143s 00:04:57.094 user 0m0.665s 00:04:57.094 sys 0m0.453s 00:04:57.094 20:03:22 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.094 20:03:22 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:57.095 ************************************ 00:04:57.095 END TEST env_vtophys 00:04:57.095 ************************************ 00:04:57.095 20:03:22 env -- common/autotest_common.sh@1142 -- # return 0 00:04:57.095 20:03:22 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:57.095 20:03:22 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.095 20:03:22 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.095 20:03:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:57.095 ************************************ 00:04:57.095 START TEST env_pci 00:04:57.095 ************************************ 00:04:57.095 20:03:22 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:57.095 00:04:57.095 00:04:57.095 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.095 http://cunit.sourceforge.net/ 00:04:57.095 00:04:57.095 00:04:57.095 Suite: pci 00:04:57.095 Test: pci_hook ...[2024-07-15 20:03:22.432214] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4033220 has claimed it 00:04:57.354 EAL: Cannot find device (10000:00:01.0) 00:04:57.354 EAL: Failed to attach device on primary process 00:04:57.354 passed 00:04:57.354 00:04:57.354 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.354 suites 1 1 n/a 0 0 00:04:57.354 tests 1 1 1 0 0 00:04:57.354 asserts 25 25 25 0 n/a 00:04:57.354 00:04:57.354 Elapsed time = 0.030 seconds 00:04:57.354 00:04:57.354 real 0m0.050s 00:04:57.354 user 0m0.013s 00:04:57.354 sys 0m0.037s 00:04:57.354 20:03:22 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.354 20:03:22 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:57.354 ************************************ 00:04:57.354 END TEST env_pci 00:04:57.354 ************************************ 
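Each sub-test above (env_memory, env_vtophys, env_pci) is driven through the same run_test wrapper, which prints the START TEST / END TEST banners and the real/user/sys timing lines that frame every block of output. A simplified stand-in is sketched below; it reproduces only the observable behaviour and is not the actual helper from autotest_common.sh, which also manages xtrace and timing bookkeeping.

    # Simplified run_test-style wrapper (illustrative only).
    run_test() {
        local test_name=$1
        shift

        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"

        # 'time' emits the real/user/sys lines seen before each END banner.
        time "$@"
        local rc=$?

        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

    # Usage, matching the invocation traced earlier:
    # run_test env_pci "$rootdir/test/env/pci/pci_ut"
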
00:04:57.354 20:03:22 env -- common/autotest_common.sh@1142 -- # return 0 00:04:57.354 20:03:22 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:57.354 20:03:22 env -- env/env.sh@15 -- # uname 00:04:57.354 20:03:22 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:57.354 20:03:22 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:57.354 20:03:22 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:57.354 20:03:22 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:57.354 20:03:22 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.354 20:03:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:57.354 ************************************ 00:04:57.354 START TEST env_dpdk_post_init 00:04:57.354 ************************************ 00:04:57.354 20:03:22 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:57.354 EAL: Detected CPU lcores: 112 00:04:57.354 EAL: Detected NUMA nodes: 2 00:04:57.354 EAL: Detected shared linkage of DPDK 00:04:57.354 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:57.354 EAL: Selected IOVA mode 'VA' 00:04:57.354 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.354 EAL: VFIO support initialized 00:04:57.354 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:57.354 EAL: Using IOMMU type 1 (Type 1) 00:04:57.354 EAL: Ignore mapping IO port bar(1) 00:04:57.354 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:57.614 EAL: Ignore mapping IO port bar(1) 00:04:57.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:57.614 EAL: Ignore mapping IO port bar(1) 00:04:57.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:57.614 EAL: Ignore mapping IO port bar(1) 00:04:57.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:57.614 EAL: Ignore mapping IO port bar(1) 00:04:57.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:57.614 EAL: Ignore mapping IO port bar(1) 00:04:57.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:57.614 EAL: Ignore mapping IO port bar(1) 00:04:57.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:57.614 EAL: Ignore mapping IO port bar(1) 00:04:57.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:57.614 EAL: Ignore mapping IO port bar(1) 00:04:57.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:57.614 EAL: Ignore mapping IO port bar(1) 00:04:57.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:57.614 EAL: Ignore mapping IO port bar(1) 00:04:57.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:57.614 EAL: Ignore mapping IO port bar(1) 00:04:57.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:57.614 EAL: Ignore mapping IO port bar(1) 00:04:57.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:57.614 EAL: Ignore mapping IO port bar(1) 00:04:57.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 
00:04:57.614 EAL: Ignore mapping IO port bar(1) 00:04:57.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:57.614 EAL: Ignore mapping IO port bar(1) 00:04:57.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:58.550 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:86:00.0 (socket 1) 00:05:01.839 EAL: Releasing PCI mapped resource for 0000:86:00.0 00:05:01.839 EAL: Calling pci_unmap_resource for 0000:86:00.0 at 0x202001040000 00:05:01.839 Starting DPDK initialization... 00:05:01.839 Starting SPDK post initialization... 00:05:01.839 SPDK NVMe probe 00:05:01.839 Attaching to 0000:86:00.0 00:05:01.839 Attached to 0000:86:00.0 00:05:01.839 Cleaning up... 00:05:01.839 00:05:01.839 real 0m4.483s 00:05:01.839 user 0m3.378s 00:05:01.839 sys 0m0.159s 00:05:01.839 20:03:27 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.839 20:03:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:01.839 ************************************ 00:05:01.839 END TEST env_dpdk_post_init 00:05:01.839 ************************************ 00:05:01.839 20:03:27 env -- common/autotest_common.sh@1142 -- # return 0 00:05:01.839 20:03:27 env -- env/env.sh@26 -- # uname 00:05:01.839 20:03:27 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:01.839 20:03:27 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.839 20:03:27 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.839 20:03:27 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.839 20:03:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.839 ************************************ 00:05:01.839 START TEST env_mem_callbacks 00:05:01.839 ************************************ 00:05:01.839 20:03:27 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.839 EAL: Detected CPU lcores: 112 00:05:01.839 EAL: Detected NUMA nodes: 2 00:05:01.839 EAL: Detected shared linkage of DPDK 00:05:01.839 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:01.839 EAL: Selected IOVA mode 'VA' 00:05:01.839 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.839 EAL: VFIO support initialized 00:05:01.839 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:01.839 00:05:01.839 00:05:01.839 CUnit - A unit testing framework for C - Version 2.1-3 00:05:01.839 http://cunit.sourceforge.net/ 00:05:01.839 00:05:01.839 00:05:01.839 Suite: memory 00:05:01.839 Test: test ... 
00:05:01.839 register 0x200000200000 2097152 00:05:01.839 malloc 3145728 00:05:01.839 register 0x200000400000 4194304 00:05:01.839 buf 0x200000500000 len 3145728 PASSED 00:05:01.839 malloc 64 00:05:01.839 buf 0x2000004fff40 len 64 PASSED 00:05:01.839 malloc 4194304 00:05:01.839 register 0x200000800000 6291456 00:05:01.839 buf 0x200000a00000 len 4194304 PASSED 00:05:01.839 free 0x200000500000 3145728 00:05:01.839 free 0x2000004fff40 64 00:05:01.839 unregister 0x200000400000 4194304 PASSED 00:05:01.839 free 0x200000a00000 4194304 00:05:01.839 unregister 0x200000800000 6291456 PASSED 00:05:01.839 malloc 8388608 00:05:01.839 register 0x200000400000 10485760 00:05:01.839 buf 0x200000600000 len 8388608 PASSED 00:05:01.839 free 0x200000600000 8388608 00:05:01.839 unregister 0x200000400000 10485760 PASSED 00:05:01.839 passed 00:05:01.839 00:05:01.839 Run Summary: Type Total Ran Passed Failed Inactive 00:05:01.839 suites 1 1 n/a 0 0 00:05:01.839 tests 1 1 1 0 0 00:05:01.839 asserts 15 15 15 0 n/a 00:05:01.839 00:05:01.839 Elapsed time = 0.007 seconds 00:05:01.839 00:05:01.839 real 0m0.062s 00:05:01.839 user 0m0.021s 00:05:01.839 sys 0m0.041s 00:05:01.839 20:03:27 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.839 20:03:27 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:01.839 ************************************ 00:05:01.839 END TEST env_mem_callbacks 00:05:01.839 ************************************ 00:05:02.099 20:03:27 env -- common/autotest_common.sh@1142 -- # return 0 00:05:02.099 00:05:02.099 real 0m6.404s 00:05:02.099 user 0m4.440s 00:05:02.099 sys 0m1.022s 00:05:02.099 20:03:27 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.099 20:03:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.099 ************************************ 00:05:02.099 END TEST env 00:05:02.099 ************************************ 00:05:02.099 20:03:27 -- common/autotest_common.sh@1142 -- # return 0 00:05:02.099 20:03:27 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:02.099 20:03:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.099 20:03:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.099 20:03:27 -- common/autotest_common.sh@10 -- # set +x 00:05:02.099 ************************************ 00:05:02.099 START TEST rpc 00:05:02.099 ************************************ 00:05:02.099 20:03:27 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:02.099 * Looking for test storage... 00:05:02.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:02.099 20:03:27 rpc -- rpc/rpc.sh@65 -- # spdk_pid=4034270 00:05:02.099 20:03:27 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.099 20:03:27 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:02.099 20:03:27 rpc -- rpc/rpc.sh@67 -- # waitforlisten 4034270 00:05:02.099 20:03:27 rpc -- common/autotest_common.sh@829 -- # '[' -z 4034270 ']' 00:05:02.099 20:03:27 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.099 20:03:27 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.099 20:03:27 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:02.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.099 20:03:27 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.099 20:03:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.099 [2024-07-15 20:03:27.397954] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:05:02.099 [2024-07-15 20:03:27.398012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4034270 ] 00:05:02.099 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.358 [2024-07-15 20:03:27.478458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.358 [2024-07-15 20:03:27.571544] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:02.358 [2024-07-15 20:03:27.571588] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4034270' to capture a snapshot of events at runtime. 00:05:02.358 [2024-07-15 20:03:27.571598] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:02.358 [2024-07-15 20:03:27.571608] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:02.358 [2024-07-15 20:03:27.571615] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4034270 for offline analysis/debug. 00:05:02.358 [2024-07-15 20:03:27.571644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.295 20:03:28 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.295 20:03:28 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:03.295 20:03:28 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:03.295 20:03:28 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:03.295 20:03:28 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:03.295 20:03:28 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:03.295 20:03:28 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.295 20:03:28 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.295 20:03:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.295 ************************************ 00:05:03.295 START TEST rpc_integrity 00:05:03.296 ************************************ 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:03.296 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.296 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:03.296 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:03.296 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:03.296 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.296 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:03.296 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.296 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:03.296 { 00:05:03.296 "name": "Malloc0", 00:05:03.296 "aliases": [ 00:05:03.296 "e510d5e0-72fa-4738-bdb5-bf38a55bb5c2" 00:05:03.296 ], 00:05:03.296 "product_name": "Malloc disk", 00:05:03.296 "block_size": 512, 00:05:03.296 "num_blocks": 16384, 00:05:03.296 "uuid": "e510d5e0-72fa-4738-bdb5-bf38a55bb5c2", 00:05:03.296 "assigned_rate_limits": { 00:05:03.296 "rw_ios_per_sec": 0, 00:05:03.296 "rw_mbytes_per_sec": 0, 00:05:03.296 "r_mbytes_per_sec": 0, 00:05:03.296 "w_mbytes_per_sec": 0 00:05:03.296 }, 00:05:03.296 "claimed": false, 00:05:03.296 "zoned": false, 00:05:03.296 "supported_io_types": { 00:05:03.296 "read": true, 00:05:03.296 "write": true, 00:05:03.296 "unmap": true, 00:05:03.296 "flush": true, 00:05:03.296 "reset": true, 00:05:03.296 "nvme_admin": false, 00:05:03.296 "nvme_io": false, 00:05:03.296 "nvme_io_md": false, 00:05:03.296 "write_zeroes": true, 00:05:03.296 "zcopy": true, 00:05:03.296 "get_zone_info": false, 00:05:03.296 "zone_management": false, 00:05:03.296 "zone_append": false, 00:05:03.296 "compare": false, 00:05:03.296 "compare_and_write": false, 00:05:03.296 "abort": true, 00:05:03.296 "seek_hole": false, 00:05:03.296 "seek_data": false, 00:05:03.296 "copy": true, 00:05:03.296 "nvme_iov_md": false 00:05:03.296 }, 00:05:03.296 "memory_domains": [ 00:05:03.296 { 00:05:03.296 "dma_device_id": "system", 00:05:03.296 "dma_device_type": 1 00:05:03.296 }, 00:05:03.296 { 00:05:03.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.296 "dma_device_type": 2 00:05:03.296 } 00:05:03.296 ], 00:05:03.296 "driver_specific": {} 00:05:03.296 } 00:05:03.296 ]' 00:05:03.296 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:03.296 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:03.296 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.296 [2024-07-15 20:03:28.499958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:03.296 [2024-07-15 20:03:28.499994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:03.296 [2024-07-15 20:03:28.500009] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21a8c80 00:05:03.296 [2024-07-15 20:03:28.500019] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:03.296 
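The rpc_integrity steps above drive spdk_tgt entirely over its JSON-RPC socket (/var/tmp/spdk.sock); rpc_cmd is the test-harness wrapper for the same methods that scripts/rpc.py exposes. A minimal interactive sketch of the same sequence against an already running target follows; paths are relative to the SPDK repo root and the sizes match the 8 MiB, 512-byte-block Malloc0 created here:

    ./scripts/rpc.py bdev_malloc_create 8 512                      # returns the new bdev name, Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # layer a passthru vbdev that claims Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length                    # 2 bdevs expected, as asserted above
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0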
[2024-07-15 20:03:28.501534] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:03.296 [2024-07-15 20:03:28.501559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:03.296 Passthru0 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.296 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.296 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:03.296 { 00:05:03.296 "name": "Malloc0", 00:05:03.296 "aliases": [ 00:05:03.296 "e510d5e0-72fa-4738-bdb5-bf38a55bb5c2" 00:05:03.296 ], 00:05:03.296 "product_name": "Malloc disk", 00:05:03.296 "block_size": 512, 00:05:03.296 "num_blocks": 16384, 00:05:03.296 "uuid": "e510d5e0-72fa-4738-bdb5-bf38a55bb5c2", 00:05:03.296 "assigned_rate_limits": { 00:05:03.296 "rw_ios_per_sec": 0, 00:05:03.296 "rw_mbytes_per_sec": 0, 00:05:03.296 "r_mbytes_per_sec": 0, 00:05:03.296 "w_mbytes_per_sec": 0 00:05:03.296 }, 00:05:03.296 "claimed": true, 00:05:03.296 "claim_type": "exclusive_write", 00:05:03.296 "zoned": false, 00:05:03.296 "supported_io_types": { 00:05:03.296 "read": true, 00:05:03.296 "write": true, 00:05:03.296 "unmap": true, 00:05:03.296 "flush": true, 00:05:03.296 "reset": true, 00:05:03.296 "nvme_admin": false, 00:05:03.296 "nvme_io": false, 00:05:03.296 "nvme_io_md": false, 00:05:03.296 "write_zeroes": true, 00:05:03.296 "zcopy": true, 00:05:03.296 "get_zone_info": false, 00:05:03.296 "zone_management": false, 00:05:03.296 "zone_append": false, 00:05:03.296 "compare": false, 00:05:03.296 "compare_and_write": false, 00:05:03.296 "abort": true, 00:05:03.296 "seek_hole": false, 00:05:03.296 "seek_data": false, 00:05:03.296 "copy": true, 00:05:03.296 "nvme_iov_md": false 00:05:03.296 }, 00:05:03.296 "memory_domains": [ 00:05:03.296 { 00:05:03.296 "dma_device_id": "system", 00:05:03.296 "dma_device_type": 1 00:05:03.296 }, 00:05:03.296 { 00:05:03.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.296 "dma_device_type": 2 00:05:03.296 } 00:05:03.296 ], 00:05:03.296 "driver_specific": {} 00:05:03.296 }, 00:05:03.296 { 00:05:03.296 "name": "Passthru0", 00:05:03.296 "aliases": [ 00:05:03.296 "0d6b1478-1330-5cda-b8d1-4964dfec2947" 00:05:03.296 ], 00:05:03.296 "product_name": "passthru", 00:05:03.296 "block_size": 512, 00:05:03.296 "num_blocks": 16384, 00:05:03.296 "uuid": "0d6b1478-1330-5cda-b8d1-4964dfec2947", 00:05:03.296 "assigned_rate_limits": { 00:05:03.296 "rw_ios_per_sec": 0, 00:05:03.296 "rw_mbytes_per_sec": 0, 00:05:03.296 "r_mbytes_per_sec": 0, 00:05:03.296 "w_mbytes_per_sec": 0 00:05:03.296 }, 00:05:03.296 "claimed": false, 00:05:03.296 "zoned": false, 00:05:03.296 "supported_io_types": { 00:05:03.296 "read": true, 00:05:03.296 "write": true, 00:05:03.296 "unmap": true, 00:05:03.296 "flush": true, 00:05:03.296 "reset": true, 00:05:03.296 "nvme_admin": false, 00:05:03.296 "nvme_io": false, 00:05:03.296 "nvme_io_md": false, 00:05:03.296 "write_zeroes": true, 00:05:03.296 "zcopy": true, 00:05:03.296 "get_zone_info": false, 00:05:03.296 "zone_management": false, 00:05:03.296 "zone_append": false, 00:05:03.296 "compare": false, 00:05:03.296 "compare_and_write": false, 00:05:03.296 "abort": true, 00:05:03.296 "seek_hole": false, 
00:05:03.296 "seek_data": false, 00:05:03.296 "copy": true, 00:05:03.296 "nvme_iov_md": false 00:05:03.296 }, 00:05:03.296 "memory_domains": [ 00:05:03.296 { 00:05:03.296 "dma_device_id": "system", 00:05:03.296 "dma_device_type": 1 00:05:03.296 }, 00:05:03.296 { 00:05:03.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.296 "dma_device_type": 2 00:05:03.296 } 00:05:03.296 ], 00:05:03.296 "driver_specific": { 00:05:03.296 "passthru": { 00:05:03.296 "name": "Passthru0", 00:05:03.296 "base_bdev_name": "Malloc0" 00:05:03.296 } 00:05:03.296 } 00:05:03.296 } 00:05:03.296 ]' 00:05:03.296 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:03.296 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:03.296 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.296 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.296 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.296 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.296 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:03.296 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:03.556 20:03:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:03.556 00:05:03.556 real 0m0.298s 00:05:03.556 user 0m0.194s 00:05:03.556 sys 0m0.037s 00:05:03.556 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.556 20:03:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.556 ************************************ 00:05:03.556 END TEST rpc_integrity 00:05:03.556 ************************************ 00:05:03.556 20:03:28 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:03.556 20:03:28 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:03.556 20:03:28 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.556 20:03:28 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.556 20:03:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.556 ************************************ 00:05:03.556 START TEST rpc_plugins 00:05:03.556 ************************************ 00:05:03.556 20:03:28 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:03.556 20:03:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:03.556 20:03:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.556 20:03:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.556 20:03:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.556 20:03:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:03.556 20:03:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:03.556 20:03:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.556 20:03:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.556 20:03:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.556 20:03:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:03.556 { 00:05:03.556 "name": "Malloc1", 00:05:03.556 "aliases": [ 00:05:03.556 "74d2c003-4dad-49ea-a350-b32b13ce01c1" 00:05:03.556 ], 00:05:03.556 "product_name": "Malloc disk", 00:05:03.556 "block_size": 4096, 00:05:03.556 "num_blocks": 256, 00:05:03.556 "uuid": "74d2c003-4dad-49ea-a350-b32b13ce01c1", 00:05:03.556 "assigned_rate_limits": { 00:05:03.556 "rw_ios_per_sec": 0, 00:05:03.556 "rw_mbytes_per_sec": 0, 00:05:03.556 "r_mbytes_per_sec": 0, 00:05:03.556 "w_mbytes_per_sec": 0 00:05:03.556 }, 00:05:03.556 "claimed": false, 00:05:03.556 "zoned": false, 00:05:03.556 "supported_io_types": { 00:05:03.556 "read": true, 00:05:03.556 "write": true, 00:05:03.556 "unmap": true, 00:05:03.556 "flush": true, 00:05:03.556 "reset": true, 00:05:03.556 "nvme_admin": false, 00:05:03.556 "nvme_io": false, 00:05:03.556 "nvme_io_md": false, 00:05:03.556 "write_zeroes": true, 00:05:03.556 "zcopy": true, 00:05:03.556 "get_zone_info": false, 00:05:03.556 "zone_management": false, 00:05:03.556 "zone_append": false, 00:05:03.556 "compare": false, 00:05:03.556 "compare_and_write": false, 00:05:03.556 "abort": true, 00:05:03.556 "seek_hole": false, 00:05:03.556 "seek_data": false, 00:05:03.556 "copy": true, 00:05:03.556 "nvme_iov_md": false 00:05:03.556 }, 00:05:03.556 "memory_domains": [ 00:05:03.556 { 00:05:03.556 "dma_device_id": "system", 00:05:03.556 "dma_device_type": 1 00:05:03.556 }, 00:05:03.556 { 00:05:03.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.556 "dma_device_type": 2 00:05:03.556 } 00:05:03.556 ], 00:05:03.556 "driver_specific": {} 00:05:03.556 } 00:05:03.556 ]' 00:05:03.556 20:03:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:03.556 20:03:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:03.556 20:03:28 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:03.556 20:03:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.556 20:03:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.556 20:03:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.556 20:03:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:03.556 20:03:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.556 20:03:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.556 20:03:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.556 20:03:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:03.556 20:03:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:03.556 20:03:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:03.556 00:05:03.556 real 0m0.149s 00:05:03.556 user 0m0.098s 00:05:03.556 sys 0m0.017s 00:05:03.556 20:03:28 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.556 20:03:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.556 ************************************ 00:05:03.556 END TEST rpc_plugins 00:05:03.556 ************************************ 00:05:03.556 20:03:28 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:03.556 20:03:28 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:03.556 20:03:28 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.556 20:03:28 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.556 20:03:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.815 ************************************ 00:05:03.815 START TEST rpc_trace_cmd_test 00:05:03.815 ************************************ 00:05:03.816 20:03:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:03.816 20:03:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:03.816 20:03:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:03.816 20:03:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.816 20:03:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:03.816 20:03:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.816 20:03:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:03.816 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4034270", 00:05:03.816 "tpoint_group_mask": "0x8", 00:05:03.816 "iscsi_conn": { 00:05:03.816 "mask": "0x2", 00:05:03.816 "tpoint_mask": "0x0" 00:05:03.816 }, 00:05:03.816 "scsi": { 00:05:03.816 "mask": "0x4", 00:05:03.816 "tpoint_mask": "0x0" 00:05:03.816 }, 00:05:03.816 "bdev": { 00:05:03.816 "mask": "0x8", 00:05:03.816 "tpoint_mask": "0xffffffffffffffff" 00:05:03.816 }, 00:05:03.816 "nvmf_rdma": { 00:05:03.816 "mask": "0x10", 00:05:03.816 "tpoint_mask": "0x0" 00:05:03.816 }, 00:05:03.816 "nvmf_tcp": { 00:05:03.816 "mask": "0x20", 00:05:03.816 "tpoint_mask": "0x0" 00:05:03.816 }, 00:05:03.816 "ftl": { 00:05:03.816 "mask": "0x40", 00:05:03.816 "tpoint_mask": "0x0" 00:05:03.816 }, 00:05:03.816 "blobfs": { 00:05:03.816 "mask": "0x80", 00:05:03.816 "tpoint_mask": "0x0" 00:05:03.816 }, 00:05:03.816 "dsa": { 00:05:03.816 "mask": "0x200", 00:05:03.816 "tpoint_mask": "0x0" 00:05:03.816 }, 00:05:03.816 "thread": { 00:05:03.816 "mask": "0x400", 00:05:03.816 "tpoint_mask": "0x0" 00:05:03.816 }, 00:05:03.816 "nvme_pcie": { 00:05:03.816 "mask": "0x800", 00:05:03.816 "tpoint_mask": "0x0" 00:05:03.816 }, 00:05:03.816 "iaa": { 00:05:03.816 "mask": "0x1000", 00:05:03.816 "tpoint_mask": "0x0" 00:05:03.816 }, 00:05:03.816 "nvme_tcp": { 00:05:03.816 "mask": "0x2000", 00:05:03.816 "tpoint_mask": "0x0" 00:05:03.816 }, 00:05:03.816 "bdev_nvme": { 00:05:03.816 "mask": "0x4000", 00:05:03.816 "tpoint_mask": "0x0" 00:05:03.816 }, 00:05:03.816 "sock": { 00:05:03.816 "mask": "0x8000", 00:05:03.816 "tpoint_mask": "0x0" 00:05:03.816 } 00:05:03.816 }' 00:05:03.816 20:03:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:03.816 20:03:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:03.816 20:03:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:03.816 20:03:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:03.816 20:03:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:03.816 20:03:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:03.816 20:03:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:03.816 20:03:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:03.816 20:03:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:03.816 20:03:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
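The trace_get_info output above matches the -e bdev flag spdk_tgt was started with: tracepoint group 0x8 (bdev) is fully enabled and the event history lives in the shared-memory file named after the target pid. A hedged sketch of inspecting that history with the spdk_trace decoder (the build/bin location is assumed from this workspace layout):

    ./scripts/rpc.py trace_get_info                  # confirm tpoint_shm_path and the group mask
    ./build/bin/spdk_trace -s spdk_tgt -p 4034270    # decode /dev/shm/spdk_tgt_trace.pid4034270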
00:05:03.816 00:05:03.816 real 0m0.228s 00:05:03.816 user 0m0.196s 00:05:03.816 sys 0m0.025s 00:05:04.075 20:03:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.075 20:03:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:04.075 ************************************ 00:05:04.075 END TEST rpc_trace_cmd_test 00:05:04.075 ************************************ 00:05:04.075 20:03:29 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:04.075 20:03:29 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:04.075 20:03:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:04.075 20:03:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:04.075 20:03:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.075 20:03:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.075 20:03:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.075 ************************************ 00:05:04.075 START TEST rpc_daemon_integrity 00:05:04.075 ************************************ 00:05:04.075 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:04.075 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:04.075 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.075 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.075 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.075 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:04.075 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:04.075 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:04.075 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:04.075 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.075 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.075 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.075 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:04.075 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:04.075 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.075 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.075 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.075 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:04.075 { 00:05:04.075 "name": "Malloc2", 00:05:04.075 "aliases": [ 00:05:04.075 "0996b019-c548-4fc4-9d7c-c0d2b867c9e5" 00:05:04.075 ], 00:05:04.075 "product_name": "Malloc disk", 00:05:04.075 "block_size": 512, 00:05:04.075 "num_blocks": 16384, 00:05:04.075 "uuid": "0996b019-c548-4fc4-9d7c-c0d2b867c9e5", 00:05:04.075 "assigned_rate_limits": { 00:05:04.075 "rw_ios_per_sec": 0, 00:05:04.075 "rw_mbytes_per_sec": 0, 00:05:04.075 "r_mbytes_per_sec": 0, 00:05:04.075 "w_mbytes_per_sec": 0 00:05:04.075 }, 00:05:04.075 "claimed": false, 00:05:04.075 "zoned": false, 00:05:04.075 "supported_io_types": { 00:05:04.075 "read": true, 00:05:04.075 "write": true, 00:05:04.075 "unmap": true, 00:05:04.075 "flush": true, 00:05:04.075 "reset": true, 00:05:04.075 "nvme_admin": false, 00:05:04.075 "nvme_io": false, 
00:05:04.075 "nvme_io_md": false, 00:05:04.075 "write_zeroes": true, 00:05:04.075 "zcopy": true, 00:05:04.075 "get_zone_info": false, 00:05:04.076 "zone_management": false, 00:05:04.076 "zone_append": false, 00:05:04.076 "compare": false, 00:05:04.076 "compare_and_write": false, 00:05:04.076 "abort": true, 00:05:04.076 "seek_hole": false, 00:05:04.076 "seek_data": false, 00:05:04.076 "copy": true, 00:05:04.076 "nvme_iov_md": false 00:05:04.076 }, 00:05:04.076 "memory_domains": [ 00:05:04.076 { 00:05:04.076 "dma_device_id": "system", 00:05:04.076 "dma_device_type": 1 00:05:04.076 }, 00:05:04.076 { 00:05:04.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.076 "dma_device_type": 2 00:05:04.076 } 00:05:04.076 ], 00:05:04.076 "driver_specific": {} 00:05:04.076 } 00:05:04.076 ]' 00:05:04.076 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:04.076 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:04.076 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:04.076 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.076 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.076 [2024-07-15 20:03:29.382482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:04.076 [2024-07-15 20:03:29.382517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:04.076 [2024-07-15 20:03:29.382535] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21aa1c0 00:05:04.076 [2024-07-15 20:03:29.382544] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:04.076 [2024-07-15 20:03:29.383916] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:04.076 [2024-07-15 20:03:29.383939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:04.076 Passthru0 00:05:04.076 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.076 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:04.076 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.076 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.076 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.076 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:04.076 { 00:05:04.076 "name": "Malloc2", 00:05:04.076 "aliases": [ 00:05:04.076 "0996b019-c548-4fc4-9d7c-c0d2b867c9e5" 00:05:04.076 ], 00:05:04.076 "product_name": "Malloc disk", 00:05:04.076 "block_size": 512, 00:05:04.076 "num_blocks": 16384, 00:05:04.076 "uuid": "0996b019-c548-4fc4-9d7c-c0d2b867c9e5", 00:05:04.076 "assigned_rate_limits": { 00:05:04.076 "rw_ios_per_sec": 0, 00:05:04.076 "rw_mbytes_per_sec": 0, 00:05:04.076 "r_mbytes_per_sec": 0, 00:05:04.076 "w_mbytes_per_sec": 0 00:05:04.076 }, 00:05:04.076 "claimed": true, 00:05:04.076 "claim_type": "exclusive_write", 00:05:04.076 "zoned": false, 00:05:04.076 "supported_io_types": { 00:05:04.076 "read": true, 00:05:04.076 "write": true, 00:05:04.076 "unmap": true, 00:05:04.076 "flush": true, 00:05:04.076 "reset": true, 00:05:04.076 "nvme_admin": false, 00:05:04.076 "nvme_io": false, 00:05:04.076 "nvme_io_md": false, 00:05:04.076 "write_zeroes": true, 00:05:04.076 "zcopy": true, 00:05:04.076 "get_zone_info": 
false, 00:05:04.076 "zone_management": false, 00:05:04.076 "zone_append": false, 00:05:04.076 "compare": false, 00:05:04.076 "compare_and_write": false, 00:05:04.076 "abort": true, 00:05:04.076 "seek_hole": false, 00:05:04.076 "seek_data": false, 00:05:04.076 "copy": true, 00:05:04.076 "nvme_iov_md": false 00:05:04.076 }, 00:05:04.076 "memory_domains": [ 00:05:04.076 { 00:05:04.076 "dma_device_id": "system", 00:05:04.076 "dma_device_type": 1 00:05:04.076 }, 00:05:04.076 { 00:05:04.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.076 "dma_device_type": 2 00:05:04.076 } 00:05:04.076 ], 00:05:04.076 "driver_specific": {} 00:05:04.076 }, 00:05:04.076 { 00:05:04.076 "name": "Passthru0", 00:05:04.076 "aliases": [ 00:05:04.076 "dde21172-cd9d-510d-a2c7-1b63712054be" 00:05:04.076 ], 00:05:04.076 "product_name": "passthru", 00:05:04.076 "block_size": 512, 00:05:04.076 "num_blocks": 16384, 00:05:04.076 "uuid": "dde21172-cd9d-510d-a2c7-1b63712054be", 00:05:04.076 "assigned_rate_limits": { 00:05:04.076 "rw_ios_per_sec": 0, 00:05:04.076 "rw_mbytes_per_sec": 0, 00:05:04.076 "r_mbytes_per_sec": 0, 00:05:04.076 "w_mbytes_per_sec": 0 00:05:04.076 }, 00:05:04.076 "claimed": false, 00:05:04.076 "zoned": false, 00:05:04.076 "supported_io_types": { 00:05:04.076 "read": true, 00:05:04.076 "write": true, 00:05:04.076 "unmap": true, 00:05:04.076 "flush": true, 00:05:04.076 "reset": true, 00:05:04.076 "nvme_admin": false, 00:05:04.076 "nvme_io": false, 00:05:04.076 "nvme_io_md": false, 00:05:04.076 "write_zeroes": true, 00:05:04.076 "zcopy": true, 00:05:04.076 "get_zone_info": false, 00:05:04.076 "zone_management": false, 00:05:04.076 "zone_append": false, 00:05:04.076 "compare": false, 00:05:04.076 "compare_and_write": false, 00:05:04.076 "abort": true, 00:05:04.076 "seek_hole": false, 00:05:04.076 "seek_data": false, 00:05:04.076 "copy": true, 00:05:04.076 "nvme_iov_md": false 00:05:04.076 }, 00:05:04.076 "memory_domains": [ 00:05:04.076 { 00:05:04.076 "dma_device_id": "system", 00:05:04.076 "dma_device_type": 1 00:05:04.076 }, 00:05:04.076 { 00:05:04.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.076 "dma_device_type": 2 00:05:04.076 } 00:05:04.076 ], 00:05:04.076 "driver_specific": { 00:05:04.076 "passthru": { 00:05:04.076 "name": "Passthru0", 00:05:04.076 "base_bdev_name": "Malloc2" 00:05:04.076 } 00:05:04.076 } 00:05:04.076 } 00:05:04.076 ]' 00:05:04.076 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:04.335 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:04.335 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:04.335 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.335 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.335 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.335 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:04.335 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.336 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.336 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.336 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:04.336 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.336 20:03:29 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.336 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.336 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:04.336 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:04.336 20:03:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:04.336 00:05:04.336 real 0m0.302s 00:05:04.336 user 0m0.196s 00:05:04.336 sys 0m0.038s 00:05:04.336 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.336 20:03:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.336 ************************************ 00:05:04.336 END TEST rpc_daemon_integrity 00:05:04.336 ************************************ 00:05:04.336 20:03:29 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:04.336 20:03:29 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:04.336 20:03:29 rpc -- rpc/rpc.sh@84 -- # killprocess 4034270 00:05:04.336 20:03:29 rpc -- common/autotest_common.sh@948 -- # '[' -z 4034270 ']' 00:05:04.336 20:03:29 rpc -- common/autotest_common.sh@952 -- # kill -0 4034270 00:05:04.336 20:03:29 rpc -- common/autotest_common.sh@953 -- # uname 00:05:04.336 20:03:29 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:04.336 20:03:29 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4034270 00:05:04.336 20:03:29 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:04.336 20:03:29 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:04.336 20:03:29 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4034270' 00:05:04.336 killing process with pid 4034270 00:05:04.336 20:03:29 rpc -- common/autotest_common.sh@967 -- # kill 4034270 00:05:04.336 20:03:29 rpc -- common/autotest_common.sh@972 -- # wait 4034270 00:05:04.904 00:05:04.904 real 0m2.694s 00:05:04.904 user 0m3.584s 00:05:04.904 sys 0m0.706s 00:05:04.904 20:03:29 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.904 20:03:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.904 ************************************ 00:05:04.904 END TEST rpc 00:05:04.904 ************************************ 00:05:04.904 20:03:29 -- common/autotest_common.sh@1142 -- # return 0 00:05:04.904 20:03:29 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:04.904 20:03:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.904 20:03:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.904 20:03:29 -- common/autotest_common.sh@10 -- # set +x 00:05:04.904 ************************************ 00:05:04.904 START TEST skip_rpc 00:05:04.904 ************************************ 00:05:04.904 20:03:30 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:04.904 * Looking for test storage... 
00:05:04.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:04.904 20:03:30 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:04.904 20:03:30 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:04.904 20:03:30 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:04.904 20:03:30 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.904 20:03:30 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.904 20:03:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.904 ************************************ 00:05:04.904 START TEST skip_rpc 00:05:04.904 ************************************ 00:05:04.904 20:03:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:04.904 20:03:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4034973 00:05:04.904 20:03:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.904 20:03:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:04.904 20:03:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:04.904 [2024-07-15 20:03:30.182009] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:05:04.904 [2024-07-15 20:03:30.182059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4034973 ] 00:05:04.904 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.163 [2024-07-15 20:03:30.264706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.163 [2024-07-15 20:03:30.351281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 4034973 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 4034973 ']' 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 4034973 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4034973 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4034973' 00:05:10.435 killing process with pid 4034973 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 4034973 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 4034973 00:05:10.435 00:05:10.435 real 0m5.387s 00:05:10.435 user 0m5.121s 00:05:10.435 sys 0m0.284s 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.435 20:03:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.435 ************************************ 00:05:10.435 END TEST skip_rpc 00:05:10.435 ************************************ 00:05:10.435 20:03:35 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:10.435 20:03:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:10.435 20:03:35 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.435 20:03:35 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.435 20:03:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.435 ************************************ 00:05:10.435 START TEST skip_rpc_with_json 00:05:10.435 ************************************ 00:05:10.435 20:03:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:10.435 20:03:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:10.435 20:03:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4036042 00:05:10.435 20:03:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.435 20:03:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4036042 00:05:10.435 20:03:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 4036042 ']' 00:05:10.435 20:03:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.435 20:03:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.435 20:03:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.435 20:03:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:10.435 20:03:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.435 20:03:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.435 [2024-07-15 20:03:35.630345] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:05:10.435 [2024-07-15 20:03:35.630395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4036042 ] 00:05:10.435 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.435 [2024-07-15 20:03:35.710084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.694 [2024-07-15 20:03:35.803328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.953 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.953 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:10.953 20:03:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:10.953 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.953 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.953 [2024-07-15 20:03:36.088197] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:10.953 request: 00:05:10.953 { 00:05:10.953 "trtype": "tcp", 00:05:10.953 "method": "nvmf_get_transports", 00:05:10.953 "req_id": 1 00:05:10.953 } 00:05:10.953 Got JSON-RPC error response 00:05:10.953 response: 00:05:10.953 { 00:05:10.953 "code": -19, 00:05:10.953 "message": "No such device" 00:05:10.953 } 00:05:10.953 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:10.953 20:03:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:10.953 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.953 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.953 [2024-07-15 20:03:36.096330] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:10.953 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.953 20:03:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:10.953 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.953 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.953 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.953 20:03:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:10.953 { 00:05:10.953 "subsystems": [ 00:05:10.953 { 00:05:10.953 "subsystem": "vfio_user_target", 00:05:10.953 "config": null 00:05:10.953 }, 00:05:10.953 { 00:05:10.953 "subsystem": "keyring", 00:05:10.953 "config": [] 00:05:10.953 }, 00:05:10.953 { 00:05:10.953 "subsystem": "iobuf", 00:05:10.953 "config": [ 00:05:10.953 { 00:05:10.953 "method": "iobuf_set_options", 00:05:10.953 "params": { 00:05:10.953 "small_pool_count": 8192, 00:05:10.953 "large_pool_count": 1024, 00:05:10.954 "small_bufsize": 8192, 00:05:10.954 "large_bufsize": 
135168 00:05:10.954 } 00:05:10.954 } 00:05:10.954 ] 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "subsystem": "sock", 00:05:10.954 "config": [ 00:05:10.954 { 00:05:10.954 "method": "sock_set_default_impl", 00:05:10.954 "params": { 00:05:10.954 "impl_name": "posix" 00:05:10.954 } 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "method": "sock_impl_set_options", 00:05:10.954 "params": { 00:05:10.954 "impl_name": "ssl", 00:05:10.954 "recv_buf_size": 4096, 00:05:10.954 "send_buf_size": 4096, 00:05:10.954 "enable_recv_pipe": true, 00:05:10.954 "enable_quickack": false, 00:05:10.954 "enable_placement_id": 0, 00:05:10.954 "enable_zerocopy_send_server": true, 00:05:10.954 "enable_zerocopy_send_client": false, 00:05:10.954 "zerocopy_threshold": 0, 00:05:10.954 "tls_version": 0, 00:05:10.954 "enable_ktls": false 00:05:10.954 } 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "method": "sock_impl_set_options", 00:05:10.954 "params": { 00:05:10.954 "impl_name": "posix", 00:05:10.954 "recv_buf_size": 2097152, 00:05:10.954 "send_buf_size": 2097152, 00:05:10.954 "enable_recv_pipe": true, 00:05:10.954 "enable_quickack": false, 00:05:10.954 "enable_placement_id": 0, 00:05:10.954 "enable_zerocopy_send_server": true, 00:05:10.954 "enable_zerocopy_send_client": false, 00:05:10.954 "zerocopy_threshold": 0, 00:05:10.954 "tls_version": 0, 00:05:10.954 "enable_ktls": false 00:05:10.954 } 00:05:10.954 } 00:05:10.954 ] 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "subsystem": "vmd", 00:05:10.954 "config": [] 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "subsystem": "accel", 00:05:10.954 "config": [ 00:05:10.954 { 00:05:10.954 "method": "accel_set_options", 00:05:10.954 "params": { 00:05:10.954 "small_cache_size": 128, 00:05:10.954 "large_cache_size": 16, 00:05:10.954 "task_count": 2048, 00:05:10.954 "sequence_count": 2048, 00:05:10.954 "buf_count": 2048 00:05:10.954 } 00:05:10.954 } 00:05:10.954 ] 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "subsystem": "bdev", 00:05:10.954 "config": [ 00:05:10.954 { 00:05:10.954 "method": "bdev_set_options", 00:05:10.954 "params": { 00:05:10.954 "bdev_io_pool_size": 65535, 00:05:10.954 "bdev_io_cache_size": 256, 00:05:10.954 "bdev_auto_examine": true, 00:05:10.954 "iobuf_small_cache_size": 128, 00:05:10.954 "iobuf_large_cache_size": 16 00:05:10.954 } 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "method": "bdev_raid_set_options", 00:05:10.954 "params": { 00:05:10.954 "process_window_size_kb": 1024 00:05:10.954 } 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "method": "bdev_iscsi_set_options", 00:05:10.954 "params": { 00:05:10.954 "timeout_sec": 30 00:05:10.954 } 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "method": "bdev_nvme_set_options", 00:05:10.954 "params": { 00:05:10.954 "action_on_timeout": "none", 00:05:10.954 "timeout_us": 0, 00:05:10.954 "timeout_admin_us": 0, 00:05:10.954 "keep_alive_timeout_ms": 10000, 00:05:10.954 "arbitration_burst": 0, 00:05:10.954 "low_priority_weight": 0, 00:05:10.954 "medium_priority_weight": 0, 00:05:10.954 "high_priority_weight": 0, 00:05:10.954 "nvme_adminq_poll_period_us": 10000, 00:05:10.954 "nvme_ioq_poll_period_us": 0, 00:05:10.954 "io_queue_requests": 0, 00:05:10.954 "delay_cmd_submit": true, 00:05:10.954 "transport_retry_count": 4, 00:05:10.954 "bdev_retry_count": 3, 00:05:10.954 "transport_ack_timeout": 0, 00:05:10.954 "ctrlr_loss_timeout_sec": 0, 00:05:10.954 "reconnect_delay_sec": 0, 00:05:10.954 "fast_io_fail_timeout_sec": 0, 00:05:10.954 "disable_auto_failback": false, 00:05:10.954 "generate_uuids": false, 00:05:10.954 "transport_tos": 0, 
00:05:10.954 "nvme_error_stat": false, 00:05:10.954 "rdma_srq_size": 0, 00:05:10.954 "io_path_stat": false, 00:05:10.954 "allow_accel_sequence": false, 00:05:10.954 "rdma_max_cq_size": 0, 00:05:10.954 "rdma_cm_event_timeout_ms": 0, 00:05:10.954 "dhchap_digests": [ 00:05:10.954 "sha256", 00:05:10.954 "sha384", 00:05:10.954 "sha512" 00:05:10.954 ], 00:05:10.954 "dhchap_dhgroups": [ 00:05:10.954 "null", 00:05:10.954 "ffdhe2048", 00:05:10.954 "ffdhe3072", 00:05:10.954 "ffdhe4096", 00:05:10.954 "ffdhe6144", 00:05:10.954 "ffdhe8192" 00:05:10.954 ] 00:05:10.954 } 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "method": "bdev_nvme_set_hotplug", 00:05:10.954 "params": { 00:05:10.954 "period_us": 100000, 00:05:10.954 "enable": false 00:05:10.954 } 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "method": "bdev_wait_for_examine" 00:05:10.954 } 00:05:10.954 ] 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "subsystem": "scsi", 00:05:10.954 "config": null 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "subsystem": "scheduler", 00:05:10.954 "config": [ 00:05:10.954 { 00:05:10.954 "method": "framework_set_scheduler", 00:05:10.954 "params": { 00:05:10.954 "name": "static" 00:05:10.954 } 00:05:10.954 } 00:05:10.954 ] 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "subsystem": "vhost_scsi", 00:05:10.954 "config": [] 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "subsystem": "vhost_blk", 00:05:10.954 "config": [] 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "subsystem": "ublk", 00:05:10.954 "config": [] 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "subsystem": "nbd", 00:05:10.954 "config": [] 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "subsystem": "nvmf", 00:05:10.954 "config": [ 00:05:10.954 { 00:05:10.954 "method": "nvmf_set_config", 00:05:10.954 "params": { 00:05:10.954 "discovery_filter": "match_any", 00:05:10.954 "admin_cmd_passthru": { 00:05:10.954 "identify_ctrlr": false 00:05:10.954 } 00:05:10.954 } 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "method": "nvmf_set_max_subsystems", 00:05:10.954 "params": { 00:05:10.954 "max_subsystems": 1024 00:05:10.954 } 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "method": "nvmf_set_crdt", 00:05:10.954 "params": { 00:05:10.954 "crdt1": 0, 00:05:10.954 "crdt2": 0, 00:05:10.954 "crdt3": 0 00:05:10.954 } 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "method": "nvmf_create_transport", 00:05:10.954 "params": { 00:05:10.954 "trtype": "TCP", 00:05:10.954 "max_queue_depth": 128, 00:05:10.954 "max_io_qpairs_per_ctrlr": 127, 00:05:10.954 "in_capsule_data_size": 4096, 00:05:10.954 "max_io_size": 131072, 00:05:10.954 "io_unit_size": 131072, 00:05:10.954 "max_aq_depth": 128, 00:05:10.954 "num_shared_buffers": 511, 00:05:10.954 "buf_cache_size": 4294967295, 00:05:10.954 "dif_insert_or_strip": false, 00:05:10.954 "zcopy": false, 00:05:10.954 "c2h_success": true, 00:05:10.954 "sock_priority": 0, 00:05:10.954 "abort_timeout_sec": 1, 00:05:10.954 "ack_timeout": 0, 00:05:10.954 "data_wr_pool_size": 0 00:05:10.954 } 00:05:10.954 } 00:05:10.954 ] 00:05:10.954 }, 00:05:10.954 { 00:05:10.954 "subsystem": "iscsi", 00:05:10.954 "config": [ 00:05:10.954 { 00:05:10.954 "method": "iscsi_set_options", 00:05:10.954 "params": { 00:05:10.954 "node_base": "iqn.2016-06.io.spdk", 00:05:10.954 "max_sessions": 128, 00:05:10.955 "max_connections_per_session": 2, 00:05:10.955 "max_queue_depth": 64, 00:05:10.955 "default_time2wait": 2, 00:05:10.955 "default_time2retain": 20, 00:05:10.955 "first_burst_length": 8192, 00:05:10.955 "immediate_data": true, 00:05:10.955 "allow_duplicated_isid": false, 00:05:10.955 
"error_recovery_level": 0, 00:05:10.955 "nop_timeout": 60, 00:05:10.955 "nop_in_interval": 30, 00:05:10.955 "disable_chap": false, 00:05:10.955 "require_chap": false, 00:05:10.955 "mutual_chap": false, 00:05:10.955 "chap_group": 0, 00:05:10.955 "max_large_datain_per_connection": 64, 00:05:10.955 "max_r2t_per_connection": 4, 00:05:10.955 "pdu_pool_size": 36864, 00:05:10.955 "immediate_data_pool_size": 16384, 00:05:10.955 "data_out_pool_size": 2048 00:05:10.955 } 00:05:10.955 } 00:05:10.955 ] 00:05:10.955 } 00:05:10.955 ] 00:05:10.955 } 00:05:10.955 20:03:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:10.955 20:03:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4036042 00:05:10.955 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 4036042 ']' 00:05:10.955 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 4036042 00:05:10.955 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:10.955 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.955 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4036042 00:05:10.955 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.955 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.955 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4036042' 00:05:10.955 killing process with pid 4036042 00:05:10.955 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 4036042 00:05:10.955 20:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 4036042 00:05:11.534 20:03:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4036080 00:05:11.534 20:03:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:11.534 20:03:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:16.847 20:03:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 4036080 00:05:16.847 20:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 4036080 ']' 00:05:16.847 20:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 4036080 00:05:16.847 20:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:16.847 20:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.847 20:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4036080 00:05:16.847 20:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.847 20:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.847 20:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4036080' 00:05:16.847 killing process with pid 4036080 00:05:16.847 20:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 4036080 00:05:16.848 20:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 4036080 
00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:16.848 00:05:16.848 real 0m6.429s 00:05:16.848 user 0m6.351s 00:05:16.848 sys 0m0.625s 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:16.848 ************************************ 00:05:16.848 END TEST skip_rpc_with_json 00:05:16.848 ************************************ 00:05:16.848 20:03:42 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:16.848 20:03:42 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:16.848 20:03:42 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.848 20:03:42 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.848 20:03:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.848 ************************************ 00:05:16.848 START TEST skip_rpc_with_delay 00:05:16.848 ************************************ 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:16.848 [2024-07-15 20:03:42.130317] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
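The "Cannot use '--wait-for-rpc'" error above is the expected result: skip_rpc_with_delay deliberately combines --no-rpc-server with --wait-for-rpc, an impossible pairing, and only passes if spdk_tgt refuses to start. A hedged sketch of the same negative check in plain shell (the leading ! plays the role of the harness's NOT wrapper):

  # must fail: --wait-for-rpc pauses startup until an RPC arrives,
  # which can never happen when the RPC server is disabled
  ! ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc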
00:05:16.848 [2024-07-15 20:03:42.130392] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:16.848 00:05:16.848 real 0m0.071s 00:05:16.848 user 0m0.051s 00:05:16.848 sys 0m0.019s 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.848 20:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:16.848 ************************************ 00:05:16.848 END TEST skip_rpc_with_delay 00:05:16.848 ************************************ 00:05:16.848 20:03:42 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:16.848 20:03:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:16.848 20:03:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:16.848 20:03:42 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:16.848 20:03:42 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.848 20:03:42 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.848 20:03:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.106 ************************************ 00:05:17.106 START TEST exit_on_failed_rpc_init 00:05:17.106 ************************************ 00:05:17.106 20:03:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:17.106 20:03:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4037165 00:05:17.106 20:03:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4037165 00:05:17.106 20:03:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 4037165 ']' 00:05:17.106 20:03:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.106 20:03:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.106 20:03:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.106 20:03:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.106 20:03:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:17.106 20:03:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.106 [2024-07-15 20:03:42.262810] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:05:17.106 [2024-07-15 20:03:42.262864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4037165 ] 00:05:17.106 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.106 [2024-07-15 20:03:42.344903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.106 [2024-07-15 20:03:42.434740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.041 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.041 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:18.041 20:03:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.041 20:03:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.041 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:18.041 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.041 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.041 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.041 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.041 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.041 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.041 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.041 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.041 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:18.041 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.041 [2024-07-15 20:03:43.244584] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
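At this point one spdk_tgt instance (pid 4037165) already owns the default RPC socket, and exit_on_failed_rpc_init is starting a second one on core mask 0x2; the failure reported just below ("RPC Unix domain socket path /var/tmp/spdk.sock in use") is exactly what the test expects. A sketch of the conflict, and of the -r option that sidesteps it when two targets really are wanted (the socket path in the last line is illustrative):

  ./build/bin/spdk_tgt -m 0x1 &      # first instance, default socket /var/tmp/spdk.sock
  ./build/bin/spdk_tgt -m 0x2        # second instance, same default socket: RPC init fails
  # to run two targets side by side, give the second its own RPC socket instead
  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock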
00:05:18.041 [2024-07-15 20:03:43.244644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4037430 ] 00:05:18.041 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.041 [2024-07-15 20:03:43.314785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.299 [2024-07-15 20:03:43.401714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.299 [2024-07-15 20:03:43.401790] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:18.299 [2024-07-15 20:03:43.401804] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:18.299 [2024-07-15 20:03:43.401815] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:18.299 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:18.299 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:18.299 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:18.299 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:18.299 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:18.299 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:18.299 20:03:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:18.299 20:03:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4037165 00:05:18.299 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 4037165 ']' 00:05:18.299 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 4037165 00:05:18.299 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:18.299 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.299 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4037165 00:05:18.299 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.299 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.299 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4037165' 00:05:18.299 killing process with pid 4037165 00:05:18.299 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 4037165 00:05:18.299 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 4037165 00:05:18.556 00:05:18.556 real 0m1.660s 00:05:18.556 user 0m2.008s 00:05:18.556 sys 0m0.443s 00:05:18.556 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.556 20:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:18.556 ************************************ 00:05:18.556 END TEST exit_on_failed_rpc_init 00:05:18.556 ************************************ 00:05:18.556 20:03:43 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:18.556 20:03:43 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:18.556 00:05:18.556 real 0m13.878s 00:05:18.556 user 0m13.660s 00:05:18.556 sys 0m1.593s 00:05:18.556 20:03:43 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.556 20:03:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.556 ************************************ 00:05:18.556 END TEST skip_rpc 00:05:18.556 ************************************ 00:05:18.815 20:03:43 -- common/autotest_common.sh@1142 -- # return 0 00:05:18.815 20:03:43 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:18.815 20:03:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.815 20:03:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.815 20:03:43 -- common/autotest_common.sh@10 -- # set +x 00:05:18.815 ************************************ 00:05:18.815 START TEST rpc_client 00:05:18.815 ************************************ 00:05:18.815 20:03:43 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:18.815 * Looking for test storage... 00:05:18.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:18.815 20:03:44 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:18.815 OK 00:05:18.815 20:03:44 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:18.815 00:05:18.815 real 0m0.111s 00:05:18.815 user 0m0.059s 00:05:18.815 sys 0m0.059s 00:05:18.815 20:03:44 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.815 20:03:44 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:18.815 ************************************ 00:05:18.815 END TEST rpc_client 00:05:18.815 ************************************ 00:05:18.815 20:03:44 -- common/autotest_common.sh@1142 -- # return 0 00:05:18.815 20:03:44 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:18.815 20:03:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.815 20:03:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.815 20:03:44 -- common/autotest_common.sh@10 -- # set +x 00:05:18.815 ************************************ 00:05:18.815 START TEST json_config 00:05:18.815 ************************************ 00:05:18.815 20:03:44 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:19.074 20:03:44 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:19.074 20:03:44 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:19.074 20:03:44 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.074 20:03:44 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.074 20:03:44 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.074 20:03:44 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.074 20:03:44 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.074 20:03:44 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.074 20:03:44 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.074 
20:03:44 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.074 20:03:44 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.074 20:03:44 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.074 20:03:44 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:05:19.074 20:03:44 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:05:19.074 20:03:44 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.074 20:03:44 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.074 20:03:44 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.074 20:03:44 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.074 20:03:44 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:19.074 20:03:44 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.074 20:03:44 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.074 20:03:44 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.074 20:03:44 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.075 20:03:44 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.075 20:03:44 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.075 20:03:44 json_config -- paths/export.sh@5 -- # export PATH 00:05:19.075 20:03:44 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.075 20:03:44 json_config -- nvmf/common.sh@47 -- # : 0 00:05:19.075 20:03:44 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:19.075 20:03:44 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:19.075 20:03:44 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.075 20:03:44 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.075 20:03:44 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.075 20:03:44 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:19.075 20:03:44 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:19.075 20:03:44 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:19.075 20:03:44 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:19.075 20:03:44 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:19.075 20:03:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:19.075 20:03:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:19.075 20:03:44 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:19.075 20:03:44 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:19.075 20:03:44 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:19.075 20:03:44 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:19.075 20:03:44 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:19.075 20:03:44 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:19.075 20:03:44 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:19.075 20:03:44 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:19.075 20:03:44 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:19.075 20:03:44 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:19.075 20:03:44 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:19.075 20:03:44 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:19.075 INFO: JSON configuration test init 00:05:19.075 20:03:44 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:19.075 20:03:44 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:19.075 20:03:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.075 20:03:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.075 20:03:44 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:19.075 20:03:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.075 20:03:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.075 20:03:44 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:19.075 20:03:44 json_config -- json_config/common.sh@9 -- # local app=target 00:05:19.075 20:03:44 json_config -- json_config/common.sh@10 -- # shift 00:05:19.075 20:03:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:19.075 20:03:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:19.075 20:03:44 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:19.075 20:03:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.075 20:03:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.075 20:03:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4037800 00:05:19.075 20:03:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:19.075 Waiting for target to run... 00:05:19.075 20:03:44 json_config -- json_config/common.sh@25 -- # waitforlisten 4037800 /var/tmp/spdk_tgt.sock 00:05:19.075 20:03:44 json_config -- common/autotest_common.sh@829 -- # '[' -z 4037800 ']' 00:05:19.075 20:03:44 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.075 20:03:44 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.075 20:03:44 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:19.075 20:03:44 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:19.075 20:03:44 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.075 20:03:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.075 [2024-07-15 20:03:44.284502] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:05:19.075 [2024-07-15 20:03:44.284548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4037800 ] 00:05:19.075 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.643 [2024-07-15 20:03:44.705140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.643 [2024-07-15 20:03:44.810883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.902 20:03:45 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.902 20:03:45 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:19.902 20:03:45 json_config -- json_config/common.sh@26 -- # echo '' 00:05:19.902 00:05:19.902 20:03:45 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:19.902 20:03:45 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:19.902 20:03:45 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.902 20:03:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.902 20:03:45 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:19.902 20:03:45 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:19.902 20:03:45 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:19.902 20:03:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.161 20:03:45 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:20.161 20:03:45 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:20.161 20:03:45 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:23.481 20:03:48 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:23.481 20:03:48 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:23.481 20:03:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:23.481 20:03:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.481 20:03:48 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:23.481 20:03:48 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:23.481 20:03:48 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:23.481 20:03:48 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:23.481 20:03:48 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:23.481 20:03:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:23.481 20:03:48 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:23.481 20:03:48 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:23.481 20:03:48 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:23.481 20:03:48 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:23.481 20:03:48 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:23.481 20:03:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.482 20:03:48 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:23.482 20:03:48 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:23.482 20:03:48 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:23.482 20:03:48 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:23.482 20:03:48 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:23.482 20:03:48 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:23.482 20:03:48 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:23.482 20:03:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:23.482 20:03:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.482 20:03:48 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:23.482 20:03:48 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:23.482 20:03:48 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:23.482 20:03:48 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:23.482 20:03:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:23.741 MallocForNvmf0 00:05:23.741 20:03:48 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:23.741 20:03:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:23.741 MallocForNvmf1 00:05:23.741 20:03:49 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:23.741 20:03:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:24.000 [2024-07-15 20:03:49.194811] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.000 20:03:49 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:24.000 20:03:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:24.260 20:03:49 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:24.260 20:03:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:24.260 20:03:49 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:24.260 20:03:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:24.519 20:03:49 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:24.519 20:03:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:24.777 [2024-07-15 20:03:49.917376] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:24.777 20:03:49 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:24.777 20:03:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:24.777 20:03:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.777 20:03:49 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:24.777 20:03:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:24.777 20:03:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.777 20:03:50 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:24.777 20:03:50 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:24.777 20:03:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:25.036 MallocBdevForConfigChangeCheck 00:05:25.036 20:03:50 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:25.036 20:03:50 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:25.036 20:03:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.036 20:03:50 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:25.036 20:03:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:25.603 20:03:50 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:25.603 INFO: shutting down applications... 00:05:25.603 20:03:50 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:25.603 20:03:50 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:25.603 20:03:50 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:25.603 20:03:50 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:26.981 Calling clear_iscsi_subsystem 00:05:26.981 Calling clear_nvmf_subsystem 00:05:26.981 Calling clear_nbd_subsystem 00:05:26.981 Calling clear_ublk_subsystem 00:05:26.981 Calling clear_vhost_blk_subsystem 00:05:26.981 Calling clear_vhost_scsi_subsystem 00:05:26.981 Calling clear_bdev_subsystem 00:05:26.981 20:03:52 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:26.981 20:03:52 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:26.981 20:03:52 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:26.981 20:03:52 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:26.981 20:03:52 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:26.981 20:03:52 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:27.549 20:03:52 json_config -- json_config/json_config.sh@345 -- # break 00:05:27.549 20:03:52 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:27.549 20:03:52 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:27.549 20:03:52 json_config -- json_config/common.sh@31 -- # local app=target 00:05:27.549 20:03:52 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:27.549 20:03:52 json_config -- json_config/common.sh@35 -- # [[ -n 4037800 ]] 00:05:27.549 20:03:52 json_config -- json_config/common.sh@38 -- # kill -SIGINT 4037800 00:05:27.549 20:03:52 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:27.549 20:03:52 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.549 20:03:52 json_config -- json_config/common.sh@41 -- # kill -0 4037800 00:05:27.549 20:03:52 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.117 20:03:53 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:28.117 20:03:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.117 20:03:53 json_config -- json_config/common.sh@41 -- # kill -0 4037800 00:05:28.117 20:03:53 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:28.117 20:03:53 json_config -- json_config/common.sh@43 -- # break 00:05:28.117 20:03:53 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:28.117 20:03:53 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:05:28.117 SPDK target shutdown done 00:05:28.117 20:03:53 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:28.117 INFO: relaunching applications... 00:05:28.117 20:03:53 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.117 20:03:53 json_config -- json_config/common.sh@9 -- # local app=target 00:05:28.117 20:03:53 json_config -- json_config/common.sh@10 -- # shift 00:05:28.117 20:03:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:28.117 20:03:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:28.117 20:03:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:28.117 20:03:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.117 20:03:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.117 20:03:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4039502 00:05:28.117 20:03:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:28.117 Waiting for target to run... 00:05:28.117 20:03:53 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.117 20:03:53 json_config -- json_config/common.sh@25 -- # waitforlisten 4039502 /var/tmp/spdk_tgt.sock 00:05:28.117 20:03:53 json_config -- common/autotest_common.sh@829 -- # '[' -z 4039502 ']' 00:05:28.117 20:03:53 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.117 20:03:53 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.117 20:03:53 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:28.117 20:03:53 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.117 20:03:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.117 [2024-07-15 20:03:53.250175] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:05:28.117 [2024-07-15 20:03:53.250246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4039502 ] 00:05:28.117 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.394 [2024-07-15 20:03:53.702790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.653 [2024-07-15 20:03:53.804941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.941 [2024-07-15 20:03:56.848359] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:31.941 [2024-07-15 20:03:56.880697] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:32.200 20:03:57 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.200 20:03:57 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:32.200 20:03:57 json_config -- json_config/common.sh@26 -- # echo '' 00:05:32.200 00:05:32.200 20:03:57 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:32.200 20:03:57 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:32.200 INFO: Checking if target configuration is the same... 00:05:32.200 20:03:57 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.200 20:03:57 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:32.200 20:03:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.200 + '[' 2 -ne 2 ']' 00:05:32.200 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:32.200 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:32.200 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:32.200 +++ basename /dev/fd/62 00:05:32.200 ++ mktemp /tmp/62.XXX 00:05:32.200 + tmp_file_1=/tmp/62.o6t 00:05:32.200 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.200 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:32.459 + tmp_file_2=/tmp/spdk_tgt_config.json.1Ne 00:05:32.459 + ret=0 00:05:32.459 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:32.718 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:32.718 + diff -u /tmp/62.o6t /tmp/spdk_tgt_config.json.1Ne 00:05:32.718 + echo 'INFO: JSON config files are the same' 00:05:32.718 INFO: JSON config files are the same 00:05:32.718 + rm /tmp/62.o6t /tmp/spdk_tgt_config.json.1Ne 00:05:32.718 + exit 0 00:05:32.718 20:03:57 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:32.718 20:03:57 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:32.718 INFO: changing configuration and checking if this can be detected... 
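The "JSON config files are the same" verdict above is reached by normalising both documents before comparing them, so that key ordering cannot create false differences. A sketch of the same check done by hand, assuming config_filter.py reads the configuration on stdin as the harness uses it (paths shortened to the repository root, temporary file names illustrative):

  # capture the live configuration, sort both it and the reference file,
  # then compare the normalised forms
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
  ./test/json_config/config_filter.py -method sort < /tmp/live.json > /tmp/live.sorted
  ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/ref.sorted
  diff -u /tmp/ref.sorted /tmp/live.sorted && echo 'configs match'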
00:05:32.718 20:03:57 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:32.718 20:03:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:32.718 20:03:58 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.718 20:03:58 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:32.718 20:03:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.718 + '[' 2 -ne 2 ']' 00:05:32.718 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:32.718 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:32.718 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:32.718 +++ basename /dev/fd/62 00:05:32.718 ++ mktemp /tmp/62.XXX 00:05:32.718 + tmp_file_1=/tmp/62.rgQ 00:05:32.718 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.718 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:32.977 + tmp_file_2=/tmp/spdk_tgt_config.json.8IB 00:05:32.977 + ret=0 00:05:32.977 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:33.251 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:33.251 + diff -u /tmp/62.rgQ /tmp/spdk_tgt_config.json.8IB 00:05:33.251 + ret=1 00:05:33.251 + echo '=== Start of file: /tmp/62.rgQ ===' 00:05:33.251 + cat /tmp/62.rgQ 00:05:33.251 + echo '=== End of file: /tmp/62.rgQ ===' 00:05:33.251 + echo '' 00:05:33.251 + echo '=== Start of file: /tmp/spdk_tgt_config.json.8IB ===' 00:05:33.251 + cat /tmp/spdk_tgt_config.json.8IB 00:05:33.252 + echo '=== End of file: /tmp/spdk_tgt_config.json.8IB ===' 00:05:33.252 + echo '' 00:05:33.252 + rm /tmp/62.rgQ /tmp/spdk_tgt_config.json.8IB 00:05:33.252 + exit 1 00:05:33.252 20:03:58 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:33.252 INFO: configuration change detected. 
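The change that produced "configuration change detected" is the deletion of MallocBdevForConfigChangeCheck, a throw-away malloc bdev created earlier in this run for exactly this purpose. A hedged sketch of the idea, using only RPCs that appear in the log (temporary file names are illustrative; the real test compares sorted configs rather than raw files):

  # create a marker bdev, snapshot the config, remove the marker,
  # and confirm a fresh snapshot no longer matches (diff exits non-zero)
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/before.json
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/after.json
  ! diff -q /tmp/before.json /tmp/after.json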
00:05:33.252 20:03:58 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:33.252 20:03:58 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:33.252 20:03:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.252 20:03:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.252 20:03:58 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:33.252 20:03:58 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:33.252 20:03:58 json_config -- json_config/json_config.sh@317 -- # [[ -n 4039502 ]] 00:05:33.252 20:03:58 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:33.252 20:03:58 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:33.252 20:03:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.252 20:03:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.252 20:03:58 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:33.252 20:03:58 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:33.252 20:03:58 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:33.252 20:03:58 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:33.252 20:03:58 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:33.252 20:03:58 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:33.252 20:03:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.252 20:03:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.252 20:03:58 json_config -- json_config/json_config.sh@323 -- # killprocess 4039502 00:05:33.252 20:03:58 json_config -- common/autotest_common.sh@948 -- # '[' -z 4039502 ']' 00:05:33.252 20:03:58 json_config -- common/autotest_common.sh@952 -- # kill -0 4039502 00:05:33.252 20:03:58 json_config -- common/autotest_common.sh@953 -- # uname 00:05:33.252 20:03:58 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:33.252 20:03:58 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4039502 00:05:33.252 20:03:58 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:33.252 20:03:58 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:33.252 20:03:58 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4039502' 00:05:33.252 killing process with pid 4039502 00:05:33.252 20:03:58 json_config -- common/autotest_common.sh@967 -- # kill 4039502 00:05:33.252 20:03:58 json_config -- common/autotest_common.sh@972 -- # wait 4039502 00:05:34.739 20:04:00 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.739 20:04:00 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:34.739 20:04:00 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.739 20:04:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.999 20:04:00 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:34.999 20:04:00 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:34.999 INFO: Success 00:05:34.999 00:05:34.999 real 0m15.984s 
00:05:34.999 user 0m17.185s 00:05:34.999 sys 0m2.137s 00:05:34.999 20:04:00 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.999 20:04:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.999 ************************************ 00:05:34.999 END TEST json_config 00:05:34.999 ************************************ 00:05:34.999 20:04:00 -- common/autotest_common.sh@1142 -- # return 0 00:05:34.999 20:04:00 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:34.999 20:04:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.999 20:04:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.999 20:04:00 -- common/autotest_common.sh@10 -- # set +x 00:05:34.999 ************************************ 00:05:34.999 START TEST json_config_extra_key 00:05:34.999 ************************************ 00:05:34.999 20:04:00 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:34.999 20:04:00 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:34.999 20:04:00 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:34.999 20:04:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.999 20:04:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.999 20:04:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.999 20:04:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.999 20:04:00 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.999 20:04:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.999 20:04:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.999 20:04:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.999 20:04:00 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.999 20:04:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.000 20:04:00 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:05:35.000 20:04:00 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:05:35.000 20:04:00 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.000 20:04:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.000 20:04:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.000 20:04:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.000 20:04:00 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:35.000 20:04:00 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.000 20:04:00 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.000 20:04:00 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.000 20:04:00 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.000 20:04:00 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.000 20:04:00 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.000 20:04:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:35.000 20:04:00 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.000 20:04:00 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:35.000 20:04:00 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:35.000 20:04:00 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:35.000 20:04:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.000 20:04:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.000 20:04:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.000 20:04:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:35.000 20:04:00 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:35.000 20:04:00 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:35.000 20:04:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:35.000 20:04:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:35.000 20:04:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:35.000 20:04:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:35.000 20:04:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:35.000 20:04:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:35.000 20:04:00 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:35.000 20:04:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:35.000 20:04:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:35.000 20:04:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:35.000 20:04:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:35.000 INFO: launching applications... 00:05:35.000 20:04:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:35.000 20:04:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:35.000 20:04:00 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:35.000 20:04:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.000 20:04:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.000 20:04:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.000 20:04:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.000 20:04:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.000 20:04:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4040934 00:05:35.000 20:04:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:35.000 Waiting for target to run... 00:05:35.000 20:04:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4040934 /var/tmp/spdk_tgt.sock 00:05:35.000 20:04:00 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 4040934 ']' 00:05:35.000 20:04:00 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:35.000 20:04:00 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.000 20:04:00 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.000 20:04:00 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.000 20:04:00 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.000 20:04:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:35.000 [2024-07-15 20:04:00.322145] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
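json_config_extra_key only verifies that the target starts cleanly from a standalone JSON file and then shuts down on SIGINT; the contents of extra_key.json are not shown in this log, but the saved-config dump earlier in the run illustrates the expected shape ("subsystems" as a top-level array). A minimal, hypothetical example of such a file and its use — whether an empty subsystem list is sufficient is an unverified assumption here:

  # hypothetical minimal config: an empty "subsystems" array
  echo '{ "subsystems": [] }' > /tmp/extra_key.json
  ./build/bin/spdk_tgt -m 0x1 -s 1024 --json /tmp/extra_key.json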
00:05:35.000 [2024-07-15 20:04:00.322203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4040934 ] 00:05:35.260 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.520 [2024-07-15 20:04:00.629056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.520 [2024-07-15 20:04:00.711032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.088 20:04:01 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.088 20:04:01 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:36.088 20:04:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:36.088 00:05:36.088 20:04:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:36.088 INFO: shutting down applications... 00:05:36.088 20:04:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:36.088 20:04:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:36.088 20:04:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:36.088 20:04:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4040934 ]] 00:05:36.088 20:04:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4040934 00:05:36.088 20:04:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:36.088 20:04:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.088 20:04:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4040934 00:05:36.088 20:04:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.656 20:04:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:36.656 20:04:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.656 20:04:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4040934 00:05:36.656 20:04:01 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:36.656 20:04:01 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:36.656 20:04:01 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:36.656 20:04:01 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:36.656 SPDK target shutdown done 00:05:36.656 20:04:01 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:36.656 Success 00:05:36.656 00:05:36.656 real 0m1.584s 00:05:36.656 user 0m1.500s 00:05:36.656 sys 0m0.401s 00:05:36.656 20:04:01 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.656 20:04:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:36.656 ************************************ 00:05:36.656 END TEST json_config_extra_key 00:05:36.656 ************************************ 00:05:36.656 20:04:01 -- common/autotest_common.sh@1142 -- # return 0 00:05:36.656 20:04:01 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:36.656 20:04:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.656 20:04:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.656 20:04:01 -- 
common/autotest_common.sh@10 -- # set +x 00:05:36.656 ************************************ 00:05:36.656 START TEST alias_rpc 00:05:36.656 ************************************ 00:05:36.656 20:04:01 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:36.656 * Looking for test storage... 00:05:36.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:36.656 20:04:01 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:36.656 20:04:01 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4041245 00:05:36.656 20:04:01 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4041245 00:05:36.656 20:04:01 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:36.656 20:04:01 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 4041245 ']' 00:05:36.656 20:04:01 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.656 20:04:01 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.656 20:04:01 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.656 20:04:01 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.656 20:04:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.656 [2024-07-15 20:04:01.971065] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:05:36.656 [2024-07-15 20:04:01.971111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4041245 ] 00:05:36.656 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.915 [2024-07-15 20:04:02.039974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.915 [2024-07-15 20:04:02.130244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.174 20:04:02 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.174 20:04:02 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:37.174 20:04:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:37.433 20:04:02 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4041245 00:05:37.433 20:04:02 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 4041245 ']' 00:05:37.433 20:04:02 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 4041245 00:05:37.433 20:04:02 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:37.433 20:04:02 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.433 20:04:02 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4041245 00:05:37.433 20:04:02 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.433 20:04:02 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.433 20:04:02 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4041245' 00:05:37.433 killing process with pid 4041245 00:05:37.433 20:04:02 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 4041245 00:05:37.433 20:04:02 alias_rpc -- common/autotest_common.sh@972 -- # wait 4041245 00:05:37.692 00:05:37.692 real 0m1.087s 00:05:37.692 user 0m1.106s 00:05:37.692 sys 0m0.422s 00:05:37.692 20:04:02 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.692 20:04:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.692 ************************************ 00:05:37.692 END TEST alias_rpc 00:05:37.692 ************************************ 00:05:37.692 20:04:02 -- common/autotest_common.sh@1142 -- # return 0 00:05:37.692 20:04:02 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:37.692 20:04:02 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:37.692 20:04:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.692 20:04:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.692 20:04:02 -- common/autotest_common.sh@10 -- # set +x 00:05:37.692 ************************************ 00:05:37.692 START TEST spdkcli_tcp 00:05:37.692 ************************************ 00:05:37.692 20:04:02 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:37.951 * Looking for test storage... 00:05:37.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:37.951 20:04:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:37.951 20:04:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:37.951 20:04:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:37.951 20:04:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:37.951 20:04:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:37.951 20:04:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:37.951 20:04:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:37.951 20:04:03 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:37.951 20:04:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.952 20:04:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4041554 00:05:37.952 20:04:03 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 4041554 00:05:37.952 20:04:03 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 4041554 ']' 00:05:37.952 20:04:03 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.952 20:04:03 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.952 20:04:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:37.952 20:04:03 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.952 20:04:03 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.952 20:04:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.952 [2024-07-15 20:04:03.140087] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
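spdkcli_tcp exercises the same RPC interface over TCP instead of the UNIX socket: as the trace below shows, tcp.sh bridges /var/tmp/spdk.sock to 127.0.0.1:9998 with socat and then asks for the method list over TCP. Reduced to the two commands the script runs:

  # forward TCP port 9998 to the target's UNIX-domain RPC socket
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  # query rpc_get_methods over TCP (100 retries, 2 s timeout, as set in tcp.sh)
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods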
00:05:37.952 [2024-07-15 20:04:03.140144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4041554 ] 00:05:37.952 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.952 [2024-07-15 20:04:03.221864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.210 [2024-07-15 20:04:03.313684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.210 [2024-07-15 20:04:03.313689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.777 20:04:04 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.777 20:04:04 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:38.777 20:04:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4041810 00:05:38.777 20:04:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:38.777 20:04:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:39.036 [ 00:05:39.036 "bdev_malloc_delete", 00:05:39.036 "bdev_malloc_create", 00:05:39.036 "bdev_null_resize", 00:05:39.036 "bdev_null_delete", 00:05:39.036 "bdev_null_create", 00:05:39.036 "bdev_nvme_cuse_unregister", 00:05:39.036 "bdev_nvme_cuse_register", 00:05:39.036 "bdev_opal_new_user", 00:05:39.036 "bdev_opal_set_lock_state", 00:05:39.036 "bdev_opal_delete", 00:05:39.036 "bdev_opal_get_info", 00:05:39.036 "bdev_opal_create", 00:05:39.036 "bdev_nvme_opal_revert", 00:05:39.036 "bdev_nvme_opal_init", 00:05:39.036 "bdev_nvme_send_cmd", 00:05:39.036 "bdev_nvme_get_path_iostat", 00:05:39.036 "bdev_nvme_get_mdns_discovery_info", 00:05:39.036 "bdev_nvme_stop_mdns_discovery", 00:05:39.036 "bdev_nvme_start_mdns_discovery", 00:05:39.036 "bdev_nvme_set_multipath_policy", 00:05:39.036 "bdev_nvme_set_preferred_path", 00:05:39.036 "bdev_nvme_get_io_paths", 00:05:39.036 "bdev_nvme_remove_error_injection", 00:05:39.036 "bdev_nvme_add_error_injection", 00:05:39.036 "bdev_nvme_get_discovery_info", 00:05:39.036 "bdev_nvme_stop_discovery", 00:05:39.036 "bdev_nvme_start_discovery", 00:05:39.036 "bdev_nvme_get_controller_health_info", 00:05:39.036 "bdev_nvme_disable_controller", 00:05:39.036 "bdev_nvme_enable_controller", 00:05:39.036 "bdev_nvme_reset_controller", 00:05:39.036 "bdev_nvme_get_transport_statistics", 00:05:39.036 "bdev_nvme_apply_firmware", 00:05:39.036 "bdev_nvme_detach_controller", 00:05:39.036 "bdev_nvme_get_controllers", 00:05:39.036 "bdev_nvme_attach_controller", 00:05:39.036 "bdev_nvme_set_hotplug", 00:05:39.036 "bdev_nvme_set_options", 00:05:39.036 "bdev_passthru_delete", 00:05:39.036 "bdev_passthru_create", 00:05:39.036 "bdev_lvol_set_parent_bdev", 00:05:39.036 "bdev_lvol_set_parent", 00:05:39.036 "bdev_lvol_check_shallow_copy", 00:05:39.036 "bdev_lvol_start_shallow_copy", 00:05:39.036 "bdev_lvol_grow_lvstore", 00:05:39.036 "bdev_lvol_get_lvols", 00:05:39.036 "bdev_lvol_get_lvstores", 00:05:39.036 "bdev_lvol_delete", 00:05:39.036 "bdev_lvol_set_read_only", 00:05:39.036 "bdev_lvol_resize", 00:05:39.036 "bdev_lvol_decouple_parent", 00:05:39.036 "bdev_lvol_inflate", 00:05:39.036 "bdev_lvol_rename", 00:05:39.036 "bdev_lvol_clone_bdev", 00:05:39.036 "bdev_lvol_clone", 00:05:39.036 "bdev_lvol_snapshot", 00:05:39.036 "bdev_lvol_create", 00:05:39.036 "bdev_lvol_delete_lvstore", 00:05:39.036 
"bdev_lvol_rename_lvstore", 00:05:39.036 "bdev_lvol_create_lvstore", 00:05:39.036 "bdev_raid_set_options", 00:05:39.036 "bdev_raid_remove_base_bdev", 00:05:39.036 "bdev_raid_add_base_bdev", 00:05:39.036 "bdev_raid_delete", 00:05:39.036 "bdev_raid_create", 00:05:39.036 "bdev_raid_get_bdevs", 00:05:39.036 "bdev_error_inject_error", 00:05:39.036 "bdev_error_delete", 00:05:39.036 "bdev_error_create", 00:05:39.036 "bdev_split_delete", 00:05:39.036 "bdev_split_create", 00:05:39.036 "bdev_delay_delete", 00:05:39.036 "bdev_delay_create", 00:05:39.036 "bdev_delay_update_latency", 00:05:39.036 "bdev_zone_block_delete", 00:05:39.036 "bdev_zone_block_create", 00:05:39.036 "blobfs_create", 00:05:39.036 "blobfs_detect", 00:05:39.036 "blobfs_set_cache_size", 00:05:39.036 "bdev_aio_delete", 00:05:39.036 "bdev_aio_rescan", 00:05:39.036 "bdev_aio_create", 00:05:39.036 "bdev_ftl_set_property", 00:05:39.036 "bdev_ftl_get_properties", 00:05:39.036 "bdev_ftl_get_stats", 00:05:39.036 "bdev_ftl_unmap", 00:05:39.036 "bdev_ftl_unload", 00:05:39.036 "bdev_ftl_delete", 00:05:39.036 "bdev_ftl_load", 00:05:39.036 "bdev_ftl_create", 00:05:39.036 "bdev_virtio_attach_controller", 00:05:39.036 "bdev_virtio_scsi_get_devices", 00:05:39.036 "bdev_virtio_detach_controller", 00:05:39.036 "bdev_virtio_blk_set_hotplug", 00:05:39.036 "bdev_iscsi_delete", 00:05:39.036 "bdev_iscsi_create", 00:05:39.036 "bdev_iscsi_set_options", 00:05:39.036 "accel_error_inject_error", 00:05:39.036 "ioat_scan_accel_module", 00:05:39.036 "dsa_scan_accel_module", 00:05:39.036 "iaa_scan_accel_module", 00:05:39.036 "vfu_virtio_create_scsi_endpoint", 00:05:39.036 "vfu_virtio_scsi_remove_target", 00:05:39.036 "vfu_virtio_scsi_add_target", 00:05:39.036 "vfu_virtio_create_blk_endpoint", 00:05:39.036 "vfu_virtio_delete_endpoint", 00:05:39.036 "keyring_file_remove_key", 00:05:39.036 "keyring_file_add_key", 00:05:39.036 "keyring_linux_set_options", 00:05:39.036 "iscsi_get_histogram", 00:05:39.036 "iscsi_enable_histogram", 00:05:39.036 "iscsi_set_options", 00:05:39.036 "iscsi_get_auth_groups", 00:05:39.036 "iscsi_auth_group_remove_secret", 00:05:39.036 "iscsi_auth_group_add_secret", 00:05:39.036 "iscsi_delete_auth_group", 00:05:39.036 "iscsi_create_auth_group", 00:05:39.036 "iscsi_set_discovery_auth", 00:05:39.036 "iscsi_get_options", 00:05:39.036 "iscsi_target_node_request_logout", 00:05:39.036 "iscsi_target_node_set_redirect", 00:05:39.036 "iscsi_target_node_set_auth", 00:05:39.036 "iscsi_target_node_add_lun", 00:05:39.036 "iscsi_get_stats", 00:05:39.036 "iscsi_get_connections", 00:05:39.036 "iscsi_portal_group_set_auth", 00:05:39.036 "iscsi_start_portal_group", 00:05:39.036 "iscsi_delete_portal_group", 00:05:39.036 "iscsi_create_portal_group", 00:05:39.036 "iscsi_get_portal_groups", 00:05:39.036 "iscsi_delete_target_node", 00:05:39.036 "iscsi_target_node_remove_pg_ig_maps", 00:05:39.036 "iscsi_target_node_add_pg_ig_maps", 00:05:39.036 "iscsi_create_target_node", 00:05:39.036 "iscsi_get_target_nodes", 00:05:39.036 "iscsi_delete_initiator_group", 00:05:39.036 "iscsi_initiator_group_remove_initiators", 00:05:39.036 "iscsi_initiator_group_add_initiators", 00:05:39.036 "iscsi_create_initiator_group", 00:05:39.036 "iscsi_get_initiator_groups", 00:05:39.036 "nvmf_set_crdt", 00:05:39.036 "nvmf_set_config", 00:05:39.036 "nvmf_set_max_subsystems", 00:05:39.036 "nvmf_stop_mdns_prr", 00:05:39.036 "nvmf_publish_mdns_prr", 00:05:39.036 "nvmf_subsystem_get_listeners", 00:05:39.036 "nvmf_subsystem_get_qpairs", 00:05:39.036 "nvmf_subsystem_get_controllers", 00:05:39.036 
"nvmf_get_stats", 00:05:39.036 "nvmf_get_transports", 00:05:39.036 "nvmf_create_transport", 00:05:39.036 "nvmf_get_targets", 00:05:39.036 "nvmf_delete_target", 00:05:39.036 "nvmf_create_target", 00:05:39.036 "nvmf_subsystem_allow_any_host", 00:05:39.036 "nvmf_subsystem_remove_host", 00:05:39.036 "nvmf_subsystem_add_host", 00:05:39.036 "nvmf_ns_remove_host", 00:05:39.036 "nvmf_ns_add_host", 00:05:39.036 "nvmf_subsystem_remove_ns", 00:05:39.036 "nvmf_subsystem_add_ns", 00:05:39.036 "nvmf_subsystem_listener_set_ana_state", 00:05:39.036 "nvmf_discovery_get_referrals", 00:05:39.036 "nvmf_discovery_remove_referral", 00:05:39.036 "nvmf_discovery_add_referral", 00:05:39.036 "nvmf_subsystem_remove_listener", 00:05:39.037 "nvmf_subsystem_add_listener", 00:05:39.037 "nvmf_delete_subsystem", 00:05:39.037 "nvmf_create_subsystem", 00:05:39.037 "nvmf_get_subsystems", 00:05:39.037 "env_dpdk_get_mem_stats", 00:05:39.037 "nbd_get_disks", 00:05:39.037 "nbd_stop_disk", 00:05:39.037 "nbd_start_disk", 00:05:39.037 "ublk_recover_disk", 00:05:39.037 "ublk_get_disks", 00:05:39.037 "ublk_stop_disk", 00:05:39.037 "ublk_start_disk", 00:05:39.037 "ublk_destroy_target", 00:05:39.037 "ublk_create_target", 00:05:39.037 "virtio_blk_create_transport", 00:05:39.037 "virtio_blk_get_transports", 00:05:39.037 "vhost_controller_set_coalescing", 00:05:39.037 "vhost_get_controllers", 00:05:39.037 "vhost_delete_controller", 00:05:39.037 "vhost_create_blk_controller", 00:05:39.037 "vhost_scsi_controller_remove_target", 00:05:39.037 "vhost_scsi_controller_add_target", 00:05:39.037 "vhost_start_scsi_controller", 00:05:39.037 "vhost_create_scsi_controller", 00:05:39.037 "thread_set_cpumask", 00:05:39.037 "framework_get_governor", 00:05:39.037 "framework_get_scheduler", 00:05:39.037 "framework_set_scheduler", 00:05:39.037 "framework_get_reactors", 00:05:39.037 "thread_get_io_channels", 00:05:39.037 "thread_get_pollers", 00:05:39.037 "thread_get_stats", 00:05:39.037 "framework_monitor_context_switch", 00:05:39.037 "spdk_kill_instance", 00:05:39.037 "log_enable_timestamps", 00:05:39.037 "log_get_flags", 00:05:39.037 "log_clear_flag", 00:05:39.037 "log_set_flag", 00:05:39.037 "log_get_level", 00:05:39.037 "log_set_level", 00:05:39.037 "log_get_print_level", 00:05:39.037 "log_set_print_level", 00:05:39.037 "framework_enable_cpumask_locks", 00:05:39.037 "framework_disable_cpumask_locks", 00:05:39.037 "framework_wait_init", 00:05:39.037 "framework_start_init", 00:05:39.037 "scsi_get_devices", 00:05:39.037 "bdev_get_histogram", 00:05:39.037 "bdev_enable_histogram", 00:05:39.037 "bdev_set_qos_limit", 00:05:39.037 "bdev_set_qd_sampling_period", 00:05:39.037 "bdev_get_bdevs", 00:05:39.037 "bdev_reset_iostat", 00:05:39.037 "bdev_get_iostat", 00:05:39.037 "bdev_examine", 00:05:39.037 "bdev_wait_for_examine", 00:05:39.037 "bdev_set_options", 00:05:39.037 "notify_get_notifications", 00:05:39.037 "notify_get_types", 00:05:39.037 "accel_get_stats", 00:05:39.037 "accel_set_options", 00:05:39.037 "accel_set_driver", 00:05:39.037 "accel_crypto_key_destroy", 00:05:39.037 "accel_crypto_keys_get", 00:05:39.037 "accel_crypto_key_create", 00:05:39.037 "accel_assign_opc", 00:05:39.037 "accel_get_module_info", 00:05:39.037 "accel_get_opc_assignments", 00:05:39.037 "vmd_rescan", 00:05:39.037 "vmd_remove_device", 00:05:39.037 "vmd_enable", 00:05:39.037 "sock_get_default_impl", 00:05:39.037 "sock_set_default_impl", 00:05:39.037 "sock_impl_set_options", 00:05:39.037 "sock_impl_get_options", 00:05:39.037 "iobuf_get_stats", 00:05:39.037 "iobuf_set_options", 
00:05:39.037 "keyring_get_keys", 00:05:39.037 "framework_get_pci_devices", 00:05:39.037 "framework_get_config", 00:05:39.037 "framework_get_subsystems", 00:05:39.037 "vfu_tgt_set_base_path", 00:05:39.037 "trace_get_info", 00:05:39.037 "trace_get_tpoint_group_mask", 00:05:39.037 "trace_disable_tpoint_group", 00:05:39.037 "trace_enable_tpoint_group", 00:05:39.037 "trace_clear_tpoint_mask", 00:05:39.037 "trace_set_tpoint_mask", 00:05:39.037 "spdk_get_version", 00:05:39.037 "rpc_get_methods" 00:05:39.037 ] 00:05:39.037 20:04:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:39.037 20:04:04 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:39.037 20:04:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.037 20:04:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:39.037 20:04:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4041554 00:05:39.037 20:04:04 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 4041554 ']' 00:05:39.037 20:04:04 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 4041554 00:05:39.037 20:04:04 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:39.037 20:04:04 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.037 20:04:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4041554 00:05:39.037 20:04:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.037 20:04:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.037 20:04:04 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4041554' 00:05:39.037 killing process with pid 4041554 00:05:39.296 20:04:04 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 4041554 00:05:39.296 20:04:04 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 4041554 00:05:39.555 00:05:39.555 real 0m1.733s 00:05:39.555 user 0m3.366s 00:05:39.555 sys 0m0.464s 00:05:39.555 20:04:04 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.555 20:04:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.555 ************************************ 00:05:39.555 END TEST spdkcli_tcp 00:05:39.555 ************************************ 00:05:39.555 20:04:04 -- common/autotest_common.sh@1142 -- # return 0 00:05:39.555 20:04:04 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:39.555 20:04:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.555 20:04:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.555 20:04:04 -- common/autotest_common.sh@10 -- # set +x 00:05:39.555 ************************************ 00:05:39.555 START TEST dpdk_mem_utility 00:05:39.555 ************************************ 00:05:39.555 20:04:04 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:39.555 * Looking for test storage... 
00:05:39.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:39.555 20:04:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:39.555 20:04:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4041896 00:05:39.555 20:04:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4041896 00:05:39.555 20:04:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.555 20:04:04 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 4041896 ']' 00:05:39.555 20:04:04 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.555 20:04:04 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.555 20:04:04 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.555 20:04:04 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.555 20:04:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.815 [2024-07-15 20:04:04.934142] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:05:39.815 [2024-07-15 20:04:04.934204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4041896 ] 00:05:39.815 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.815 [2024-07-15 20:04:05.015092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.815 [2024-07-15 20:04:05.105342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.752 20:04:05 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.752 20:04:05 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:40.752 20:04:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:40.752 20:04:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:40.752 20:04:05 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.753 20:04:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:40.753 { 00:05:40.753 "filename": "/tmp/spdk_mem_dump.txt" 00:05:40.753 } 00:05:40.753 20:04:05 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.753 20:04:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:40.753 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:40.753 1 heaps totaling size 814.000000 MiB 00:05:40.753 size: 814.000000 MiB heap id: 0 00:05:40.753 end heaps---------- 00:05:40.753 8 mempools totaling size 598.116089 MiB 00:05:40.753 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:40.753 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:40.753 size: 84.521057 MiB name: bdev_io_4041896 00:05:40.753 size: 51.011292 MiB name: evtpool_4041896 00:05:40.753 
size: 50.003479 MiB name: msgpool_4041896 00:05:40.753 size: 21.763794 MiB name: PDU_Pool 00:05:40.753 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:40.753 size: 0.026123 MiB name: Session_Pool 00:05:40.753 end mempools------- 00:05:40.753 6 memzones totaling size 4.142822 MiB 00:05:40.753 size: 1.000366 MiB name: RG_ring_0_4041896 00:05:40.753 size: 1.000366 MiB name: RG_ring_1_4041896 00:05:40.753 size: 1.000366 MiB name: RG_ring_4_4041896 00:05:40.753 size: 1.000366 MiB name: RG_ring_5_4041896 00:05:40.753 size: 0.125366 MiB name: RG_ring_2_4041896 00:05:40.753 size: 0.015991 MiB name: RG_ring_3_4041896 00:05:40.753 end memzones------- 00:05:40.753 20:04:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:40.753 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:40.753 list of free elements. size: 12.519348 MiB 00:05:40.753 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:40.753 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:40.753 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:40.753 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:40.753 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:40.753 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:40.753 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:40.753 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:40.753 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:40.753 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:40.753 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:40.753 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:40.753 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:40.753 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:40.753 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:40.753 list of standard malloc elements. 
size: 199.218079 MiB 00:05:40.753 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:40.753 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:40.753 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:40.753 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:40.753 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:40.753 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:40.753 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:40.753 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:40.753 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:40.753 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:40.753 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:40.753 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:40.753 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:40.753 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:40.753 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:40.753 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:40.753 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:40.753 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:40.753 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:40.753 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:40.753 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:40.753 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:40.753 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:40.753 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:40.753 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:40.753 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:40.753 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:40.753 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:40.753 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:40.753 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:40.753 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:40.753 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:40.753 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:40.753 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:40.753 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:40.753 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:40.753 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:40.753 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:40.753 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:40.753 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:40.753 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:40.753 list of memzone associated elements. 
size: 602.262573 MiB 00:05:40.753 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:40.753 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:40.753 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:40.753 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:40.753 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:40.753 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_4041896_0 00:05:40.753 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:40.753 associated memzone info: size: 48.002930 MiB name: MP_evtpool_4041896_0 00:05:40.753 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:40.753 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4041896_0 00:05:40.753 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:40.753 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:40.753 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:40.753 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:40.753 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:40.753 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_4041896 00:05:40.753 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:40.753 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4041896 00:05:40.753 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:40.753 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4041896 00:05:40.753 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:40.753 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:40.753 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:40.753 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:40.753 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:40.753 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:40.753 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:40.753 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:40.753 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:40.753 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4041896 00:05:40.753 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:40.753 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4041896 00:05:40.753 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:40.753 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4041896 00:05:40.753 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:40.753 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4041896 00:05:40.753 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:40.753 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4041896 00:05:40.753 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:40.753 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:40.753 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:40.753 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:40.753 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:40.753 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:40.753 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:40.753 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_4041896 00:05:40.753 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:40.753 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:40.754 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:40.754 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:40.754 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:40.754 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4041896 00:05:40.754 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:40.754 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:40.754 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:40.754 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4041896 00:05:40.754 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:40.754 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4041896 00:05:40.754 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:40.754 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:40.754 20:04:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:40.754 20:04:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4041896 00:05:40.754 20:04:06 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 4041896 ']' 00:05:40.754 20:04:06 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 4041896 00:05:40.754 20:04:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:40.754 20:04:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.754 20:04:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4041896 00:05:40.754 20:04:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.754 20:04:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.754 20:04:06 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4041896' 00:05:40.754 killing process with pid 4041896 00:05:40.754 20:04:06 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 4041896 00:05:40.754 20:04:06 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 4041896 00:05:41.321 00:05:41.321 real 0m1.598s 00:05:41.321 user 0m1.797s 00:05:41.321 sys 0m0.439s 00:05:41.321 20:04:06 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.321 20:04:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:41.321 ************************************ 00:05:41.321 END TEST dpdk_mem_utility 00:05:41.321 ************************************ 00:05:41.321 20:04:06 -- common/autotest_common.sh@1142 -- # return 0 00:05:41.321 20:04:06 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:41.321 20:04:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.321 20:04:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.321 20:04:06 -- common/autotest_common.sh@10 -- # set +x 00:05:41.321 ************************************ 00:05:41.321 START TEST event 00:05:41.321 ************************************ 00:05:41.321 20:04:06 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:41.321 * Looking for test storage... 
00:05:41.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:41.321 20:04:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:41.321 20:04:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:41.321 20:04:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:41.321 20:04:06 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:41.321 20:04:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.321 20:04:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.321 ************************************ 00:05:41.321 START TEST event_perf 00:05:41.321 ************************************ 00:05:41.321 20:04:06 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:41.321 Running I/O for 1 seconds...[2024-07-15 20:04:06.604418] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:05:41.321 [2024-07-15 20:04:06.604493] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4042235 ] 00:05:41.321 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.580 [2024-07-15 20:04:06.686967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:41.580 [2024-07-15 20:04:06.778906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.580 [2024-07-15 20:04:06.779006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.580 [2024-07-15 20:04:06.779109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.580 [2024-07-15 20:04:06.779113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.515 Running I/O for 1 seconds... 00:05:42.515 lcore 0: 170860 00:05:42.515 lcore 1: 170859 00:05:42.515 lcore 2: 170858 00:05:42.515 lcore 3: 170860 00:05:42.515 done. 00:05:42.515 00:05:42.515 real 0m1.273s 00:05:42.515 user 0m4.178s 00:05:42.515 sys 0m0.091s 00:05:42.515 20:04:07 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.515 20:04:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.515 ************************************ 00:05:42.515 END TEST event_perf 00:05:42.515 ************************************ 00:05:42.773 20:04:07 event -- common/autotest_common.sh@1142 -- # return 0 00:05:42.773 20:04:07 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:42.773 20:04:07 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:42.773 20:04:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.773 20:04:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.773 ************************************ 00:05:42.773 START TEST event_reactor 00:05:42.773 ************************************ 00:05:42.773 20:04:07 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:42.773 [2024-07-15 20:04:07.947249] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
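The event_perf run above spreads a one-second event workload across the four reactors selected by -m 0xF and reports the per-lcore event counts; it is a standalone binary and can be rerun directly from the spdk tree:

  # event-processing microbenchmark on cores 0-3 for 1 second
  ./test/event/event_perf/event_perf -m 0xF -t 1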
00:05:42.773 [2024-07-15 20:04:07.947322] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4042505 ] 00:05:42.773 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.773 [2024-07-15 20:04:08.028438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.773 [2024-07-15 20:04:08.115810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.148 test_start 00:05:44.148 oneshot 00:05:44.148 tick 100 00:05:44.148 tick 100 00:05:44.148 tick 250 00:05:44.148 tick 100 00:05:44.148 tick 100 00:05:44.148 tick 100 00:05:44.148 tick 250 00:05:44.148 tick 500 00:05:44.148 tick 100 00:05:44.148 tick 100 00:05:44.148 tick 250 00:05:44.148 tick 100 00:05:44.148 tick 100 00:05:44.148 test_end 00:05:44.148 00:05:44.148 real 0m1.266s 00:05:44.148 user 0m1.173s 00:05:44.148 sys 0m0.087s 00:05:44.148 20:04:09 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.148 20:04:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:44.148 ************************************ 00:05:44.148 END TEST event_reactor 00:05:44.148 ************************************ 00:05:44.148 20:04:09 event -- common/autotest_common.sh@1142 -- # return 0 00:05:44.148 20:04:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:44.148 20:04:09 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:44.148 20:04:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.148 20:04:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.148 ************************************ 00:05:44.148 START TEST event_reactor_perf 00:05:44.148 ************************************ 00:05:44.148 20:04:09 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:44.148 [2024-07-15 20:04:09.278599] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:05:44.148 [2024-07-15 20:04:09.278673] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4042785 ] 00:05:44.148 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.148 [2024-07-15 20:04:09.361688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.148 [2024-07-15 20:04:09.449806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.524 test_start 00:05:45.524 test_end 00:05:45.524 Performance: 311661 events per second 00:05:45.524 00:05:45.524 real 0m1.270s 00:05:45.524 user 0m1.170s 00:05:45.524 sys 0m0.093s 00:05:45.524 20:04:10 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.524 20:04:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:45.524 ************************************ 00:05:45.524 END TEST event_reactor_perf 00:05:45.524 ************************************ 00:05:45.524 20:04:10 event -- common/autotest_common.sh@1142 -- # return 0 00:05:45.524 20:04:10 event -- event/event.sh@49 -- # uname -s 00:05:45.524 20:04:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:45.524 20:04:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:45.524 20:04:10 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.524 20:04:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.524 20:04:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.524 ************************************ 00:05:45.524 START TEST event_scheduler 00:05:45.524 ************************************ 00:05:45.524 20:04:10 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:45.524 * Looking for test storage... 00:05:45.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:45.524 20:04:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:45.524 20:04:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4043095 00:05:45.524 20:04:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.524 20:04:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:45.524 20:04:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4043095 00:05:45.524 20:04:10 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 4043095 ']' 00:05:45.524 20:04:10 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.524 20:04:10 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.524 20:04:10 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:45.524 20:04:10 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.524 20:04:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.524 [2024-07-15 20:04:10.741825] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:05:45.524 [2024-07-15 20:04:10.741892] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4043095 ] 00:05:45.524 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.524 [2024-07-15 20:04:10.799984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:45.524 [2024-07-15 20:04:10.875401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.524 [2024-07-15 20:04:10.875515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.524 [2024-07-15 20:04:10.875541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.524 [2024-07-15 20:04:10.875542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.782 20:04:10 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.782 20:04:10 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:45.782 20:04:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:45.782 20:04:10 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.782 20:04:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.782 [2024-07-15 20:04:10.964236] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:45.782 [2024-07-15 20:04:10.964259] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:45.782 [2024-07-15 20:04:10.964268] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:45.782 [2024-07-15 20:04:10.964273] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:45.782 [2024-07-15 20:04:10.964278] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:45.782 20:04:10 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.782 20:04:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:45.782 20:04:10 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.782 20:04:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.782 [2024-07-15 20:04:11.035123] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
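The scheduler test app above was started with --wait-for-rpc, so its initialization stalls until the test switches the framework to the dynamic scheduler and then calls framework_start_init; only after the "Scheduler test application started" notice does the thread-creation phase below run. A hand-run sketch of that startup handshake against the app's default RPC socket:

  # select the dynamic scheduler while the app is still waiting for RPC
  ./scripts/rpc.py framework_set_scheduler dynamic
  # let framework initialization finish
  ./scripts/rpc.py framework_start_init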
00:05:45.782 20:04:11 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.782 20:04:11 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:45.782 20:04:11 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.782 20:04:11 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.782 20:04:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.782 ************************************ 00:05:45.782 START TEST scheduler_create_thread 00:05:45.782 ************************************ 00:05:45.782 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:45.782 20:04:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:45.782 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.782 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.782 2 00:05:45.782 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.782 20:04:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:45.782 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.782 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.782 3 00:05:45.782 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.782 20:04:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:45.782 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.782 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.782 4 00:05:45.782 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.782 20:04:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:45.783 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.783 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.783 5 00:05:45.783 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.783 20:04:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:45.783 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.783 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.783 6 00:05:45.783 20:04:11 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.783 20:04:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:45.783 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.783 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.783 7 00:05:45.783 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.783 20:04:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:45.783 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.783 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.783 8 00:05:45.783 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.783 20:04:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:45.783 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.783 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.041 9 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.041 10 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.041 20:04:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.977 20:04:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.977 00:05:46.977 real 0m1.170s 00:05:46.977 user 0m0.023s 00:05:46.977 sys 0m0.005s 00:05:46.977 20:04:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.977 20:04:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.977 ************************************ 00:05:46.977 END TEST scheduler_create_thread 00:05:46.977 ************************************ 00:05:46.977 20:04:12 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:46.977 20:04:12 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:46.977 20:04:12 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4043095 00:05:46.977 20:04:12 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 4043095 ']' 00:05:46.977 20:04:12 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 4043095 00:05:46.977 20:04:12 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:46.977 20:04:12 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.977 20:04:12 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4043095 00:05:46.977 20:04:12 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:46.977 20:04:12 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:46.977 20:04:12 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4043095' 00:05:46.977 killing process with pid 4043095 00:05:46.977 20:04:12 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 4043095 00:05:46.977 20:04:12 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 4043095 00:05:47.542 [2024-07-15 20:04:12.720483] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
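The block above is the scheduler_create_thread test: it creates active and idle threads pinned to individual cores, a partially active unpinned thread, changes one thread's activity at runtime, and deletes another, all through the scheduler RPC plugin loaded into the running scheduler app. A minimal sketch of the same calls issued by hand with scripts/rpc.py (the socket path and the PYTHONPATH setup that makes scheduler_plugin importable are assumptions, not shown in the log):

  rpc="./scripts/rpc.py --plugin scheduler_plugin"
  # always-busy thread pinned to core 0 (mask 0x1)
  $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # fully idle thread pinned to core 1 (mask 0x2)
  $rpc scheduler_thread_create -n idle_pinned -m 0x2 -a 0
  # unpinned thread created idle, then raised to 50% activity via its returned id
  tid=$($rpc scheduler_thread_create -n half_active -a 0)
  $rpc scheduler_thread_set_active "$tid" 50
  # threads can also be deleted while the app keeps running
  tid=$($rpc scheduler_thread_create -n deleted -a 100)
  $rpc scheduler_thread_delete "$tid"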
00:05:47.542 00:05:47.542 real 0m2.288s 00:05:47.542 user 0m2.786s 00:05:47.542 sys 0m0.345s 00:05:47.542 20:04:12 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.542 20:04:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.542 ************************************ 00:05:47.542 END TEST event_scheduler 00:05:47.542 ************************************ 00:05:47.801 20:04:12 event -- common/autotest_common.sh@1142 -- # return 0 00:05:47.801 20:04:12 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:47.801 20:04:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:47.801 20:04:12 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.801 20:04:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.801 20:04:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.801 ************************************ 00:05:47.801 START TEST app_repeat 00:05:47.801 ************************************ 00:05:47.801 20:04:12 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:47.801 20:04:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.801 20:04:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.801 20:04:12 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:47.801 20:04:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.801 20:04:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:47.801 20:04:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:47.801 20:04:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:47.801 20:04:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4043663 00:05:47.801 20:04:12 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.801 20:04:12 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:47.801 20:04:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4043663' 00:05:47.801 Process app_repeat pid: 4043663 00:05:47.801 20:04:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.801 20:04:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:47.801 spdk_app_start Round 0 00:05:47.801 20:04:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4043663 /var/tmp/spdk-nbd.sock 00:05:47.801 20:04:12 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4043663 ']' 00:05:47.801 20:04:12 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.801 20:04:12 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.801 20:04:12 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.801 20:04:12 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.801 20:04:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.801 [2024-07-15 20:04:12.996925] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
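The event_scheduler suite is done at this point and app_repeat takes over. The app is started once with an RPC socket, a two-core mask and repeat_times=4, and the harness blocks on waitforlisten until that UNIX-domain socket answers before issuing any bdev RPCs. A rough sketch of that launch pattern, with the long workspace path shortened for readability (waitforlisten and killprocess are helpers from autotest_common.sh):

  ./test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
  repeat_pid=$!
  trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
  # proceed only once the app is accepting RPCs on its socket
  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock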
00:05:47.801 [2024-07-15 20:04:12.996977] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4043663 ] 00:05:47.801 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.801 [2024-07-15 20:04:13.077274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.060 [2024-07-15 20:04:13.171206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.060 [2024-07-15 20:04:13.171212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.060 20:04:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.060 20:04:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:48.060 20:04:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.317 Malloc0 00:05:48.317 20:04:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.574 Malloc1 00:05:48.574 20:04:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.574 20:04:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.574 20:04:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.574 20:04:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.574 20:04:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.574 20:04:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.574 20:04:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.574 20:04:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.574 20:04:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.574 20:04:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.574 20:04:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.574 20:04:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.574 20:04:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.574 20:04:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.574 20:04:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.574 20:04:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.832 /dev/nbd0 00:05:48.832 20:04:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.832 20:04:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.832 20:04:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:48.832 20:04:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:48.832 20:04:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:48.832 20:04:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:48.832 20:04:14 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:48.832 20:04:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:48.832 20:04:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:48.832 20:04:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:48.832 20:04:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.832 1+0 records in 00:05:48.832 1+0 records out 00:05:48.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022 s, 18.6 MB/s 00:05:48.832 20:04:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.832 20:04:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:48.832 20:04:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.832 20:04:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:48.832 20:04:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:48.832 20:04:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.832 20:04:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.832 20:04:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.091 /dev/nbd1 00:05:49.091 20:04:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.091 20:04:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.091 20:04:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:49.091 20:04:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:49.091 20:04:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:49.091 20:04:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:49.091 20:04:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:49.091 20:04:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:49.091 20:04:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:49.091 20:04:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:49.091 20:04:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.091 1+0 records in 00:05:49.091 1+0 records out 00:05:49.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240337 s, 17.0 MB/s 00:05:49.091 20:04:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.091 20:04:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:49.091 20:04:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.091 20:04:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:49.091 20:04:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:49.091 20:04:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.091 20:04:14 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.091 20:04:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.091 20:04:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.091 20:04:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:49.350 { 00:05:49.350 "nbd_device": "/dev/nbd0", 00:05:49.350 "bdev_name": "Malloc0" 00:05:49.350 }, 00:05:49.350 { 00:05:49.350 "nbd_device": "/dev/nbd1", 00:05:49.350 "bdev_name": "Malloc1" 00:05:49.350 } 00:05:49.350 ]' 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.350 { 00:05:49.350 "nbd_device": "/dev/nbd0", 00:05:49.350 "bdev_name": "Malloc0" 00:05:49.350 }, 00:05:49.350 { 00:05:49.350 "nbd_device": "/dev/nbd1", 00:05:49.350 "bdev_name": "Malloc1" 00:05:49.350 } 00:05:49.350 ]' 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.350 /dev/nbd1' 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.350 /dev/nbd1' 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.350 256+0 records in 00:05:49.350 256+0 records out 00:05:49.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00977633 s, 107 MB/s 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.350 256+0 records in 00:05:49.350 256+0 records out 00:05:49.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191887 s, 54.6 MB/s 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.350 20:04:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.609 256+0 records in 00:05:49.609 256+0 records out 00:05:49.609 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0208618 s, 50.3 MB/s 00:05:49.609 20:04:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.609 20:04:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.609 20:04:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.609 20:04:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.609 20:04:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.609 20:04:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.609 20:04:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.609 20:04:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.609 20:04:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.609 20:04:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.609 20:04:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.609 20:04:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.609 20:04:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.609 20:04:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.609 20:04:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.609 20:04:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.609 20:04:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:49.609 20:04:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.609 20:04:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.867 20:04:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.867 20:04:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.867 20:04:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.867 20:04:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.867 20:04:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.867 20:04:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.867 20:04:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.867 20:04:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.867 20:04:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.867 20:04:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.867 20:04:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.867 20:04:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.867 20:04:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.867 20:04:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.867 20:04:15 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.867 20:04:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.867 20:04:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.867 20:04:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.867 20:04:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.867 20:04:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.867 20:04:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.125 20:04:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.125 20:04:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.125 20:04:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.384 20:04:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.384 20:04:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.384 20:04:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.384 20:04:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.384 20:04:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.384 20:04:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.384 20:04:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.384 20:04:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.384 20:04:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.384 20:04:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.642 20:04:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:50.643 [2024-07-15 20:04:15.965695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.901 [2024-07-15 20:04:16.047593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.901 [2024-07-15 20:04:16.047599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.901 [2024-07-15 20:04:16.092884] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.901 [2024-07-15 20:04:16.092928] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.436 20:04:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.436 20:04:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:53.436 spdk_app_start Round 1 00:05:53.436 20:04:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4043663 /var/tmp/spdk-nbd.sock 00:05:53.436 20:04:18 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4043663 ']' 00:05:53.436 20:04:18 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.436 20:04:18 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.436 20:04:18 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
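The round that just completed, and the two that follow, all run the same sequence: create two 64 MB malloc bdevs, export them as /dev/nbd0 and /dev/nbd1, write a 1 MiB random pattern to each, compare it back, stop the NBD devices, and send SIGTERM over RPC so the app re-enters spdk_app_start for the next round. A condensed sketch of one round (the temporary file path here is illustrative; the log uses test/event/nbdrandtest inside the workspace):

  rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create 64 4096              # prints the new bdev name, e.g. Malloc0
  $rpc bdev_malloc_create 64 4096              # Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of=$dev bs=4096 count=256 oflag=direct
      cmp -b -n 1M /tmp/nbdrandtest $dev       # any mismatch fails the round
  done
  rm /tmp/nbdrandtest
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
  $rpc spdk_kill_instance SIGTERM              # app restarts spdk_app_start for the next round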
00:05:53.436 20:04:18 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.436 20:04:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.694 20:04:19 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.694 20:04:19 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:53.694 20:04:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.954 Malloc0 00:05:53.954 20:04:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.213 Malloc1 00:05:54.213 20:04:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.213 20:04:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.213 20:04:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.213 20:04:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.213 20:04:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.213 20:04:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.213 20:04:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.213 20:04:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.213 20:04:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.213 20:04:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.213 20:04:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.213 20:04:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.213 20:04:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.213 20:04:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.213 20:04:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.213 20:04:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.473 /dev/nbd0 00:05:54.473 20:04:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.473 20:04:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.473 20:04:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:54.473 20:04:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:54.473 20:04:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:54.473 20:04:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:54.473 20:04:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:54.473 20:04:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:54.473 20:04:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:54.473 20:04:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:54.473 20:04:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:54.473 1+0 records in 00:05:54.473 1+0 records out 00:05:54.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235047 s, 17.4 MB/s 00:05:54.473 20:04:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.473 20:04:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:54.473 20:04:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.473 20:04:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:54.473 20:04:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:54.473 20:04:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.473 20:04:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.473 20:04:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.732 /dev/nbd1 00:05:54.732 20:04:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.732 20:04:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.732 20:04:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:54.732 20:04:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:54.732 20:04:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:54.732 20:04:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:54.732 20:04:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:54.990 20:04:20 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:54.990 20:04:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:54.990 20:04:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:54.990 20:04:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.990 1+0 records in 00:05:54.990 1+0 records out 00:05:54.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175838 s, 23.3 MB/s 00:05:54.990 20:04:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.990 20:04:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:54.990 20:04:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.990 20:04:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:54.990 20:04:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:54.990 20:04:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.990 20:04:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.990 20:04:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.990 20:04:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.990 20:04:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.990 20:04:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:54.990 { 00:05:54.990 "nbd_device": "/dev/nbd0", 00:05:54.990 "bdev_name": "Malloc0" 00:05:54.990 }, 00:05:54.990 { 00:05:54.990 "nbd_device": "/dev/nbd1", 00:05:54.990 "bdev_name": "Malloc1" 00:05:54.990 } 00:05:54.990 ]' 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.249 { 00:05:55.249 "nbd_device": "/dev/nbd0", 00:05:55.249 "bdev_name": "Malloc0" 00:05:55.249 }, 00:05:55.249 { 00:05:55.249 "nbd_device": "/dev/nbd1", 00:05:55.249 "bdev_name": "Malloc1" 00:05:55.249 } 00:05:55.249 ]' 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.249 /dev/nbd1' 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.249 /dev/nbd1' 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.249 256+0 records in 00:05:55.249 256+0 records out 00:05:55.249 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103569 s, 101 MB/s 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.249 256+0 records in 00:05:55.249 256+0 records out 00:05:55.249 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193328 s, 54.2 MB/s 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.249 256+0 records in 00:05:55.249 256+0 records out 00:05:55.249 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203889 s, 51.4 MB/s 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.249 20:04:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.508 20:04:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.508 20:04:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.508 20:04:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.508 20:04:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.508 20:04:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.508 20:04:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.508 20:04:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.508 20:04:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.508 20:04:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.508 20:04:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.767 20:04:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.767 20:04:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.767 20:04:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.767 20:04:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.767 20:04:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.767 20:04:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.767 20:04:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.767 20:04:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.767 20:04:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.767 20:04:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.767 20:04:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.025 20:04:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.025 20:04:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.025 20:04:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.025 20:04:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.025 20:04:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.025 20:04:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.025 20:04:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.025 20:04:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.025 20:04:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.025 20:04:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.025 20:04:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.025 20:04:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.025 20:04:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.284 20:04:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.543 [2024-07-15 20:04:21.711363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.543 [2024-07-15 20:04:21.794052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.543 [2024-07-15 20:04:21.794057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.543 [2024-07-15 20:04:21.839737] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.543 [2024-07-15 20:04:21.839782] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.832 20:04:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.832 20:04:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:59.832 spdk_app_start Round 2 00:05:59.832 20:04:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4043663 /var/tmp/spdk-nbd.sock 00:05:59.832 20:04:24 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4043663 ']' 00:05:59.832 20:04:24 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.832 20:04:24 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.832 20:04:24 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
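Before the data pass and again after the devices are stopped, the harness asks the target which NBD devices it still exports and counts them: two while a round is active, zero once both disks are stopped. A sketch of that check, mirroring the nbd_get_disks / jq / grep sequence in the trace (the trailing || true keeps the count at 0 instead of failing when grep finds nothing):

  disks_json=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  # one device path per exported bdev, e.g. /dev/nbd0 and /dev/nbd1
  names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$names" | grep -c /dev/nbd || true)
  echo "currently exported NBD devices: $count"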
00:05:59.832 20:04:24 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.832 20:04:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.832 20:04:24 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.832 20:04:24 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:59.832 20:04:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.832 Malloc0 00:05:59.833 20:04:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.833 Malloc1 00:05:59.833 20:04:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.833 20:04:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.833 20:04:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.833 20:04:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.833 20:04:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.833 20:04:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.833 20:04:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.833 20:04:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.833 20:04:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.833 20:04:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.833 20:04:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.833 20:04:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.833 20:04:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.833 20:04:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.833 20:04:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.833 20:04:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:59.833 /dev/nbd0 00:05:59.833 20:04:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.833 20:04:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.833 20:04:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:59.833 20:04:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:59.833 20:04:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:59.833 20:04:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:59.833 20:04:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:00.091 20:04:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:00.091 20:04:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:00.091 20:04:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:00.091 20:04:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:00.091 1+0 records in 00:06:00.091 1+0 records out 00:06:00.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191609 s, 21.4 MB/s 00:06:00.091 20:04:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.091 20:04:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:00.091 20:04:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.091 20:04:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:00.091 20:04:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:00.091 20:04:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.091 20:04:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.091 20:04:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.091 /dev/nbd1 00:06:00.349 20:04:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.349 20:04:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.349 20:04:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:00.349 20:04:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:00.349 20:04:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:00.349 20:04:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:00.349 20:04:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:00.349 20:04:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:00.349 20:04:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:00.349 20:04:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:00.349 20:04:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.349 1+0 records in 00:06:00.349 1+0 records out 00:06:00.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179531 s, 22.8 MB/s 00:06:00.349 20:04:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.349 20:04:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:00.349 20:04:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.349 20:04:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:00.350 20:04:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:00.350 20:04:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.350 20:04:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.350 20:04:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.350 20:04:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.350 20:04:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.607 20:04:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:00.607 { 00:06:00.607 "nbd_device": "/dev/nbd0", 00:06:00.607 "bdev_name": "Malloc0" 00:06:00.607 }, 00:06:00.607 { 00:06:00.607 "nbd_device": "/dev/nbd1", 00:06:00.607 "bdev_name": "Malloc1" 00:06:00.607 } 00:06:00.607 ]' 00:06:00.607 20:04:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.607 { 00:06:00.607 "nbd_device": "/dev/nbd0", 00:06:00.607 "bdev_name": "Malloc0" 00:06:00.607 }, 00:06:00.607 { 00:06:00.607 "nbd_device": "/dev/nbd1", 00:06:00.607 "bdev_name": "Malloc1" 00:06:00.607 } 00:06:00.607 ]' 00:06:00.607 20:04:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.607 20:04:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.607 /dev/nbd1' 00:06:00.607 20:04:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.607 /dev/nbd1' 00:06:00.607 20:04:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.607 20:04:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.607 20:04:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.607 20:04:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.607 20:04:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.607 20:04:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.607 20:04:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.607 20:04:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.607 20:04:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.607 20:04:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.607 20:04:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.607 20:04:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.607 256+0 records in 00:06:00.607 256+0 records out 00:06:00.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103306 s, 102 MB/s 00:06:00.607 20:04:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.607 20:04:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.607 256+0 records in 00:06:00.607 256+0 records out 00:06:00.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191314 s, 54.8 MB/s 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.608 256+0 records in 00:06:00.608 256+0 records out 00:06:00.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207535 s, 50.5 MB/s 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.608 20:04:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.866 20:04:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.866 20:04:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.866 20:04:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.866 20:04:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.866 20:04:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.866 20:04:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.866 20:04:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.866 20:04:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.866 20:04:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.866 20:04:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.124 20:04:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.124 20:04:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.124 20:04:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.124 20:04:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.124 20:04:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.124 20:04:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.124 20:04:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.124 20:04:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.124 20:04:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.124 20:04:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.124 20:04:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.383 20:04:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.383 20:04:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.383 20:04:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.383 20:04:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.383 20:04:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.383 20:04:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.383 20:04:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.383 20:04:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.383 20:04:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.383 20:04:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.383 20:04:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.383 20:04:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.383 20:04:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.641 20:04:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:01.899 [2024-07-15 20:04:27.001055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.899 [2024-07-15 20:04:27.082922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.899 [2024-07-15 20:04:27.082927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.899 [2024-07-15 20:04:27.128350] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.899 [2024-07-15 20:04:27.128394] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.185 20:04:29 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4043663 /var/tmp/spdk-nbd.sock 00:06:05.185 20:04:29 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4043663 ']' 00:06:05.185 20:04:29 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.185 20:04:29 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.185 20:04:29 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
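The waitfornbd and waitfornbd_exit helpers that bracket every nbd_start_disk/nbd_stop_disk call simply poll /proc/partitions until the kernel has registered (or released) the named device; waitfornbd then does one direct 4 KiB read to prove the device is actually usable. A condensed sketch of the registration side, with the 20-try limit taken from the trace and the sleep interval and temp file path assumed:

  waitfornbd() {
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # a single O_DIRECT read of one 4096-byte block must return data
      dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" -ne 0 ]
  }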
00:06:05.185 20:04:29 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.185 20:04:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.185 20:04:30 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.185 20:04:30 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:05.185 20:04:30 event.app_repeat -- event/event.sh@39 -- # killprocess 4043663 00:06:05.185 20:04:30 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 4043663 ']' 00:06:05.185 20:04:30 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 4043663 00:06:05.185 20:04:30 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:05.185 20:04:30 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.185 20:04:30 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4043663 00:06:05.185 20:04:30 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.185 20:04:30 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.185 20:04:30 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4043663' 00:06:05.185 killing process with pid 4043663 00:06:05.185 20:04:30 event.app_repeat -- common/autotest_common.sh@967 -- # kill 4043663 00:06:05.185 20:04:30 event.app_repeat -- common/autotest_common.sh@972 -- # wait 4043663 00:06:05.185 spdk_app_start is called in Round 0. 00:06:05.185 Shutdown signal received, stop current app iteration 00:06:05.185 Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 reinitialization... 00:06:05.185 spdk_app_start is called in Round 1. 00:06:05.185 Shutdown signal received, stop current app iteration 00:06:05.185 Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 reinitialization... 00:06:05.185 spdk_app_start is called in Round 2. 00:06:05.185 Shutdown signal received, stop current app iteration 00:06:05.185 Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 reinitialization... 00:06:05.185 spdk_app_start is called in Round 3. 
00:06:05.185 Shutdown signal received, stop current app iteration 00:06:05.185 20:04:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:05.185 20:04:30 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:05.185 00:06:05.185 real 0m17.311s 00:06:05.185 user 0m38.202s 00:06:05.185 sys 0m2.738s 00:06:05.185 20:04:30 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.185 20:04:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.185 ************************************ 00:06:05.185 END TEST app_repeat 00:06:05.185 ************************************ 00:06:05.185 20:04:30 event -- common/autotest_common.sh@1142 -- # return 0 00:06:05.185 20:04:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:05.185 20:04:30 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:05.185 20:04:30 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.185 20:04:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.185 20:04:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.185 ************************************ 00:06:05.185 START TEST cpu_locks 00:06:05.185 ************************************ 00:06:05.185 20:04:30 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:05.186 * Looking for test storage... 00:06:05.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:05.186 20:04:30 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:05.186 20:04:30 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:05.186 20:04:30 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:05.186 20:04:30 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:05.186 20:04:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.186 20:04:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.186 20:04:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.186 ************************************ 00:06:05.186 START TEST default_locks 00:06:05.186 ************************************ 00:06:05.186 20:04:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:05.186 20:04:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4047036 00:06:05.186 20:04:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4047036 00:06:05.186 20:04:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.186 20:04:30 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 4047036 ']' 00:06:05.186 20:04:30 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.186 20:04:30 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.186 20:04:30 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
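Every test in cpu_locks.sh follows the same opening pattern seen here: launch a fresh spdk_tgt with an explicit core mask, record its pid, and block in waitforlisten until the UNIX-domain RPC socket answers. The real waitforlisten lives in test/common/autotest_common.sh; a simplified sketch of the kind of polling it performs, assuming the default socket from the log (retry count and sleep interval are illustrative):

# Poll until the target with the given pid is listening on its RPC socket.
wait_for_rpc() {
    local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} retries=100
    local rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    while ((retries-- > 0)); do
        # Give up early if the target died before it ever listened.
        kill -0 "$pid" 2>/dev/null || return 1
        # rpc_get_methods only succeeds once the app is up and serving RPCs.
        "$rpc_py" -s "$rpc_sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}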
00:06:05.186 20:04:30 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.186 20:04:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.186 [2024-07-15 20:04:30.504881] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:05.186 [2024-07-15 20:04:30.504932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4047036 ] 00:06:05.186 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.445 [2024-07-15 20:04:30.586170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.445 [2024-07-15 20:04:30.678396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.704 20:04:30 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.704 20:04:30 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:05.704 20:04:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4047036 00:06:05.704 20:04:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4047036 00:06:05.704 20:04:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.704 lslocks: write error 00:06:05.704 20:04:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4047036 00:06:05.704 20:04:30 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 4047036 ']' 00:06:05.704 20:04:30 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 4047036 00:06:05.704 20:04:30 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:05.704 20:04:30 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.704 20:04:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4047036 00:06:05.704 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.704 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.704 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4047036' 00:06:05.704 killing process with pid 4047036 00:06:05.704 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 4047036 00:06:05.704 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 4047036 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4047036 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4047036 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 4047036 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 4047036 ']' 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (4047036) - No such process 00:06:06.273 ERROR: process (pid: 4047036) is no longer running 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:06.273 00:06:06.273 real 0m0.923s 00:06:06.273 user 0m0.897s 00:06:06.273 sys 0m0.398s 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.273 20:04:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.273 ************************************ 00:06:06.273 END TEST default_locks 00:06:06.273 ************************************ 00:06:06.273 20:04:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:06.273 20:04:31 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:06.274 20:04:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.274 20:04:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.274 20:04:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.274 ************************************ 00:06:06.274 START TEST default_locks_via_rpc 00:06:06.274 ************************************ 00:06:06.274 20:04:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:06.274 20:04:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4047324 00:06:06.274 20:04:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.274 20:04:31 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4047324 00:06:06.274 20:04:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4047324 ']' 00:06:06.274 20:04:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.274 20:04:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.274 20:04:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.274 20:04:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.274 20:04:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.274 [2024-07-15 20:04:31.466296] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:06.274 [2024-07-15 20:04:31.466330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4047324 ] 00:06:06.274 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.274 [2024-07-15 20:04:31.535448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.532 [2024-07-15 20:04:31.628451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.532 20:04:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.532 20:04:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:06.532 20:04:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:06.532 20:04:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.532 20:04:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.532 20:04:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.532 20:04:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:06.532 20:04:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:06.532 20:04:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:06.532 20:04:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:06.532 20:04:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:06.532 20:04:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.532 20:04:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.532 20:04:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.532 20:04:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4047324 00:06:06.532 20:04:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4047324 00:06:06.532 20:04:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.790 
20:04:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4047324 00:06:06.790 20:04:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 4047324 ']' 00:06:06.790 20:04:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 4047324 00:06:06.790 20:04:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:06.790 20:04:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.790 20:04:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4047324 00:06:06.790 20:04:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.790 20:04:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.790 20:04:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4047324' 00:06:06.790 killing process with pid 4047324 00:06:06.790 20:04:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 4047324 00:06:06.790 20:04:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 4047324 00:06:07.401 00:06:07.401 real 0m1.001s 00:06:07.401 user 0m1.005s 00:06:07.401 sys 0m0.421s 00:06:07.401 20:04:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.401 20:04:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.401 ************************************ 00:06:07.401 END TEST default_locks_via_rpc 00:06:07.401 ************************************ 00:06:07.401 20:04:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:07.401 20:04:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:07.401 20:04:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.401 20:04:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.401 20:04:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.401 ************************************ 00:06:07.401 START TEST non_locking_app_on_locked_coremask 00:06:07.401 ************************************ 00:06:07.401 20:04:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:07.401 20:04:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4047477 00:06:07.401 20:04:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4047477 /var/tmp/spdk.sock 00:06:07.401 20:04:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4047477 ']' 00:06:07.401 20:04:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.401 20:04:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.401 20:04:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
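default_locks_via_rpc, which finishes above, first turns the core locks off and back on at runtime (framework_disable_cpumask_locks / framework_enable_cpumask_locks) and then asserts that the lock is really held. The assertion is the locks_exist helper: pipe lslocks for the target pid into grep for the spdk_cpu_lock files. The "lslocks: write error" lines seen throughout are harmless; grep -q exits on its first match, so lslocks loses its output pipe. A standalone sketch of the same assertion (lock-file prefix taken from the log, function name illustrative):

# Assert that the spdk_tgt with the given pid holds at least one per-core
# lock file (/var/tmp/spdk_cpu_lock_*), as reported by lslocks.
has_cpu_lock() {
    local pid=$1
    # grep -q closes the pipe after the first hit, which is what produces the
    # harmless "lslocks: write error" messages in the log.
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

has_cpu_lock "$spdk_tgt_pid" && echo "core mask lock held"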
00:06:07.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.401 20:04:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.401 20:04:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.401 20:04:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.401 [2024-07-15 20:04:32.552103] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:07.401 [2024-07-15 20:04:32.552159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4047477 ] 00:06:07.401 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.401 [2024-07-15 20:04:32.633321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.687 [2024-07-15 20:04:32.725201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.253 20:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.253 20:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:08.253 20:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4047630 00:06:08.253 20:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 4047630 /var/tmp/spdk2.sock 00:06:08.253 20:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4047630 ']' 00:06:08.253 20:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.253 20:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.253 20:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.253 20:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.253 20:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.254 20:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:08.254 [2024-07-15 20:04:33.568458] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:08.254 [2024-07-15 20:04:33.568568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4047630 ] 00:06:08.512 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.512 [2024-07-15 20:04:33.710298] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
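non_locking_app_on_locked_coremask, starting above, runs two targets on the same core mask: the first claims core 0 and takes its lock, while the second is launched with --disable-cpumask-locks and its own RPC socket so it can share core 0 without contending for that lock. Reduced to their essentials, the two command lines from the log look like this (backgrounding and pid capture added for illustration):

SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

# First instance: claims core 0 and creates its /var/tmp/spdk_cpu_lock_000 file.
"$SPDK_BIN" -m 0x1 &
locked_pid=$!

# Second instance: same mask, but lock checks disabled and a separate RPC
# socket, so startup succeeds even though core 0 is already claimed.
"$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
unlocked_pid=$!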
00:06:08.512 [2024-07-15 20:04:33.710325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.770 [2024-07-15 20:04:33.890597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.337 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.337 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:09.337 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4047477 00:06:09.337 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4047477 00:06:09.337 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.595 lslocks: write error 00:06:09.595 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4047477 00:06:09.595 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4047477 ']' 00:06:09.595 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 4047477 00:06:09.595 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:09.595 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.596 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4047477 00:06:09.596 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.596 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.596 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4047477' 00:06:09.596 killing process with pid 4047477 00:06:09.596 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 4047477 00:06:09.596 20:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 4047477 00:06:10.161 20:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4047630 00:06:10.161 20:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4047630 ']' 00:06:10.161 20:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 4047630 00:06:10.161 20:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:10.161 20:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.161 20:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4047630 00:06:10.161 20:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.161 20:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.161 20:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4047630' 00:06:10.161 
killing process with pid 4047630 00:06:10.161 20:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 4047630 00:06:10.161 20:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 4047630 00:06:10.729 00:06:10.729 real 0m3.343s 00:06:10.729 user 0m3.752s 00:06:10.729 sys 0m0.977s 00:06:10.729 20:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.729 20:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.729 ************************************ 00:06:10.729 END TEST non_locking_app_on_locked_coremask 00:06:10.729 ************************************ 00:06:10.729 20:04:35 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:10.729 20:04:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:10.729 20:04:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.729 20:04:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.729 20:04:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.729 ************************************ 00:06:10.729 START TEST locking_app_on_unlocked_coremask 00:06:10.729 ************************************ 00:06:10.729 20:04:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:10.729 20:04:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4048188 00:06:10.729 20:04:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4048188 /var/tmp/spdk.sock 00:06:10.729 20:04:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:10.729 20:04:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4048188 ']' 00:06:10.729 20:04:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.729 20:04:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.729 20:04:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.729 20:04:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.729 20:04:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.729 [2024-07-15 20:04:35.966278] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:06:10.729 [2024-07-15 20:04:35.966336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4048188 ] 00:06:10.729 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.729 [2024-07-15 20:04:36.048843] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:10.729 [2024-07-15 20:04:36.048873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.988 [2024-07-15 20:04:36.134905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.571 20:04:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.571 20:04:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:11.571 20:04:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4048367 00:06:11.571 20:04:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4048367 /var/tmp/spdk2.sock 00:06:11.571 20:04:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:11.571 20:04:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4048367 ']' 00:06:11.571 20:04:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.571 20:04:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.571 20:04:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.571 20:04:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.571 20:04:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.831 [2024-07-15 20:04:36.956602] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
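locking_app_on_unlocked_coremask flips the previous arrangement: here the first target runs with --disable-cpumask-locks, leaving core 0 unlocked, and a second plain target on the same mask then claims the lock; the lslocks assertion that follows is made against the second pid (4048367). The reduced launch order, reusing SPDK_BIN from the sketch above:

# First target deliberately leaves core 0 unlocked.
"$SPDK_BIN" -m 0x1 --disable-cpumask-locks &

# Second target takes the core-0 lock even though the first is already
# running there, which is what the lslocks check below confirms.
"$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock &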
00:06:11.831 [2024-07-15 20:04:36.956663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4048367 ] 00:06:11.831 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.831 [2024-07-15 20:04:37.064893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.090 [2024-07-15 20:04:37.245441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.657 20:04:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.657 20:04:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:12.657 20:04:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4048367 00:06:12.657 20:04:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4048367 00:06:12.657 20:04:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.226 lslocks: write error 00:06:13.226 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4048188 00:06:13.226 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4048188 ']' 00:06:13.226 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 4048188 00:06:13.226 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:13.226 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.226 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4048188 00:06:13.226 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.226 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.226 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4048188' 00:06:13.226 killing process with pid 4048188 00:06:13.226 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 4048188 00:06:13.226 20:04:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 4048188 00:06:13.795 20:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4048367 00:06:13.795 20:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4048367 ']' 00:06:13.795 20:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 4048367 00:06:13.795 20:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:13.795 20:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.054 20:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4048367 00:06:14.054 20:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:14.054 20:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.054 20:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4048367' 00:06:14.054 killing process with pid 4048367 00:06:14.054 20:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 4048367 00:06:14.054 20:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 4048367 00:06:14.313 00:06:14.313 real 0m3.594s 00:06:14.313 user 0m4.030s 00:06:14.313 sys 0m1.031s 00:06:14.313 20:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.313 20:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.313 ************************************ 00:06:14.313 END TEST locking_app_on_unlocked_coremask 00:06:14.313 ************************************ 00:06:14.313 20:04:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:14.313 20:04:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:14.314 20:04:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.314 20:04:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.314 20:04:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.314 ************************************ 00:06:14.314 START TEST locking_app_on_locked_coremask 00:06:14.314 ************************************ 00:06:14.314 20:04:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:14.314 20:04:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4048792 00:06:14.314 20:04:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4048792 /var/tmp/spdk.sock 00:06:14.314 20:04:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4048792 ']' 00:06:14.314 20:04:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.314 20:04:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.314 20:04:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.314 20:04:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.314 20:04:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.314 20:04:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.314 [2024-07-15 20:04:39.621364] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:06:14.314 [2024-07-15 20:04:39.621419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4048792 ] 00:06:14.314 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.573 [2024-07-15 20:04:39.702841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.573 [2024-07-15 20:04:39.794268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.510 20:04:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.510 20:04:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:15.510 20:04:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4049023 00:06:15.510 20:04:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4049023 /var/tmp/spdk2.sock 00:06:15.510 20:04:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:15.510 20:04:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4049023 /var/tmp/spdk2.sock 00:06:15.510 20:04:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:15.510 20:04:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:15.510 20:04:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.510 20:04:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:15.511 20:04:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.511 20:04:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 4049023 /var/tmp/spdk2.sock 00:06:15.511 20:04:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4049023 ']' 00:06:15.511 20:04:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.511 20:04:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.511 20:04:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.511 20:04:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.511 20:04:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.511 [2024-07-15 20:04:40.605831] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
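locking_app_on_locked_coremask is the negative case: the second target (pid 4049023) is started on the already-locked core without --disable-cpumask-locks, so it is expected to abort during startup, and the test therefore wraps waitforlisten in the NOT helper, which inverts the exit status so the test only passes when the inner command fails. The real NOT in autotest_common.sh also handles signals and argument validation; a reduced sketch of just the negation idea:

# Succeed only when the wrapped command fails, as with the expected
# "Cannot create lock on core 0" startup failure that follows.
NOT() {
    if "$@"; then
        return 1   # the command unexpectedly succeeded
    fi
    return 0       # the command failed, which is what the caller wanted
}

# Expected to return 0 here, because the second target never comes up.
NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock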
00:06:15.511 [2024-07-15 20:04:40.605878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4049023 ] 00:06:15.511 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.511 [2024-07-15 20:04:40.704791] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4048792 has claimed it. 00:06:15.511 [2024-07-15 20:04:40.704842] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:16.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (4049023) - No such process 00:06:16.078 ERROR: process (pid: 4049023) is no longer running 00:06:16.078 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.078 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:16.078 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:16.078 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:16.078 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:16.078 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:16.078 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 4048792 00:06:16.078 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4048792 00:06:16.078 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.338 lslocks: write error 00:06:16.338 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4048792 00:06:16.338 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4048792 ']' 00:06:16.338 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 4048792 00:06:16.338 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:16.338 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.338 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4048792 00:06:16.338 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.338 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.338 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4048792' 00:06:16.338 killing process with pid 4048792 00:06:16.338 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 4048792 00:06:16.338 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 4048792 00:06:16.597 00:06:16.597 real 0m2.354s 00:06:16.597 user 0m2.725s 00:06:16.597 sys 0m0.627s 00:06:16.597 20:04:41 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.597 20:04:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.597 ************************************ 00:06:16.597 END TEST locking_app_on_locked_coremask 00:06:16.597 ************************************ 00:06:16.856 20:04:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:16.856 20:04:41 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:16.856 20:04:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.856 20:04:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.856 20:04:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.856 ************************************ 00:06:16.856 START TEST locking_overlapped_coremask 00:06:16.856 ************************************ 00:06:16.856 20:04:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:16.856 20:04:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:16.856 20:04:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4049334 00:06:16.856 20:04:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4049334 /var/tmp/spdk.sock 00:06:16.856 20:04:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 4049334 ']' 00:06:16.856 20:04:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.856 20:04:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.856 20:04:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.856 20:04:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.856 20:04:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.856 [2024-07-15 20:04:42.034845] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:06:16.857 [2024-07-15 20:04:42.034895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4049334 ] 00:06:16.857 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.857 [2024-07-15 20:04:42.114571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.857 [2024-07-15 20:04:42.207899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.857 [2024-07-15 20:04:42.207999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.857 [2024-07-15 20:04:42.207999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.116 20:04:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.116 20:04:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:17.116 20:04:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4049394 00:06:17.116 20:04:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4049394 /var/tmp/spdk2.sock 00:06:17.116 20:04:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:17.116 20:04:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:17.116 20:04:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4049394 /var/tmp/spdk2.sock 00:06:17.116 20:04:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:17.116 20:04:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.116 20:04:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:17.116 20:04:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.116 20:04:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 4049394 /var/tmp/spdk2.sock 00:06:17.116 20:04:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 4049394 ']' 00:06:17.116 20:04:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.116 20:04:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.116 20:04:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.116 20:04:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.116 20:04:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.375 [2024-07-15 20:04:42.472654] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
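The overlap that makes this second launch fail is in the masks themselves: 0x7 is binary 00111 (cores 0-2) and 0x1c is binary 11100 (cores 2-4), so both targets claim core 2 and the second one cannot take the core-2 lock, as the error on the next lines shows. A one-liner to see the contested bits:

# Intersect the two core masks from the log; a non-zero result means the
# masks overlap (here bit 2, i.e. core 2 -> 0x4).
printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))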
00:06:17.375 [2024-07-15 20:04:42.472699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4049394 ] 00:06:17.375 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.375 [2024-07-15 20:04:42.545344] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4049334 has claimed it. 00:06:17.375 [2024-07-15 20:04:42.545380] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:17.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (4049394) - No such process 00:06:17.942 ERROR: process (pid: 4049394) is no longer running 00:06:17.942 20:04:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.942 20:04:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:17.942 20:04:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:17.942 20:04:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:17.942 20:04:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:17.942 20:04:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:17.942 20:04:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:17.942 20:04:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:17.942 20:04:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:17.943 20:04:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:17.943 20:04:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4049334 00:06:17.943 20:04:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 4049334 ']' 00:06:17.943 20:04:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 4049334 00:06:17.943 20:04:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:17.943 20:04:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.943 20:04:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4049334 00:06:17.943 20:04:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.943 20:04:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.943 20:04:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4049334' 00:06:17.943 killing process with pid 4049334 00:06:17.943 20:04:43 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 4049334 00:06:17.943 20:04:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 4049334 00:06:18.510 00:06:18.510 real 0m1.578s 00:06:18.510 user 0m4.286s 00:06:18.510 sys 0m0.405s 00:06:18.510 20:04:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.510 20:04:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.510 ************************************ 00:06:18.510 END TEST locking_overlapped_coremask 00:06:18.510 ************************************ 00:06:18.510 20:04:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:18.510 20:04:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:18.510 20:04:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.510 20:04:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.510 20:04:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.510 ************************************ 00:06:18.510 START TEST locking_overlapped_coremask_via_rpc 00:06:18.510 ************************************ 00:06:18.510 20:04:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:18.510 20:04:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4049630 00:06:18.510 20:04:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4049630 /var/tmp/spdk.sock 00:06:18.510 20:04:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:18.510 20:04:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4049630 ']' 00:06:18.510 20:04:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.510 20:04:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.510 20:04:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.510 20:04:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.510 20:04:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.510 [2024-07-15 20:04:43.684191] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:18.510 [2024-07-15 20:04:43.684243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4049630 ] 00:06:18.510 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.510 [2024-07-15 20:04:43.765567] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
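The locking_overlapped_coremask teardown above also ran check_remaining_locks, which globs /var/tmp/spdk_cpu_lock_* and compares the result against the exact set expected for the surviving mask 0x7 (cores 0-2). A standalone rendering of the comparison shown in the log:

# Verify that exactly the per-core lock files for mask 0x7 (cores 0-2) exist.
check_remaining_locks_0x7() {
    local locks=(/var/tmp/spdk_cpu_lock_*)
    local expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${expected[*]}" ]]
}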
00:06:18.510 [2024-07-15 20:04:43.765597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.510 [2024-07-15 20:04:43.859217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.510 [2024-07-15 20:04:43.859341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.510 [2024-07-15 20:04:43.859346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.448 20:04:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.448 20:04:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:19.448 20:04:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4049894 00:06:19.448 20:04:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4049894 /var/tmp/spdk2.sock 00:06:19.448 20:04:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:19.448 20:04:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4049894 ']' 00:06:19.448 20:04:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.448 20:04:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.448 20:04:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.448 20:04:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.448 20:04:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.448 [2024-07-15 20:04:44.690226] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:19.448 [2024-07-15 20:04:44.690299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4049894 ] 00:06:19.448 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.448 [2024-07-15 20:04:44.768488] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:19.448 [2024-07-15 20:04:44.768507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.707 [2024-07-15 20:04:44.911576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.707 [2024-07-15 20:04:44.911692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.707 [2024-07-15 20:04:44.911693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.644 [2024-07-15 20:04:45.652328] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4049630 has claimed it. 
00:06:20.644 request: 00:06:20.644 { 00:06:20.644 "method": "framework_enable_cpumask_locks", 00:06:20.644 "req_id": 1 00:06:20.644 } 00:06:20.644 Got JSON-RPC error response 00:06:20.644 response: 00:06:20.644 { 00:06:20.644 "code": -32603, 00:06:20.644 "message": "Failed to claim CPU core: 2" 00:06:20.644 } 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4049630 /var/tmp/spdk.sock 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4049630 ']' 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.644 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:20.645 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4049894 /var/tmp/spdk2.sock 00:06:20.645 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4049894 ']' 00:06:20.645 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.645 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.645 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
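The -32603 response above is the expected outcome given the two core masks used in this test: the first target was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so core 2 is the only core both processes try to claim. A quick cross-check of that overlap, assuming it is just the bitwise AND of the two masks:

  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. only bit 2 (core 2) is claimed by both targets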
00:06:20.645 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.645 20:04:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.904 20:04:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.904 20:04:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:20.904 20:04:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:20.904 20:04:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:20.904 20:04:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:20.904 20:04:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:20.904 00:06:20.904 real 0m2.557s 00:06:20.904 user 0m1.269s 00:06:20.904 sys 0m0.209s 00:06:20.904 20:04:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.904 20:04:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.904 ************************************ 00:06:20.904 END TEST locking_overlapped_coremask_via_rpc 00:06:20.904 ************************************ 00:06:20.904 20:04:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:20.904 20:04:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:20.904 20:04:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4049630 ]] 00:06:20.904 20:04:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4049630 00:06:20.904 20:04:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4049630 ']' 00:06:20.904 20:04:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4049630 00:06:20.904 20:04:46 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:20.904 20:04:46 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.904 20:04:46 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4049630 00:06:21.162 20:04:46 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.162 20:04:46 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.162 20:04:46 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4049630' 00:06:21.162 killing process with pid 4049630 00:06:21.162 20:04:46 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 4049630 00:06:21.162 20:04:46 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 4049630 00:06:21.420 20:04:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4049894 ]] 00:06:21.420 20:04:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4049894 00:06:21.420 20:04:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4049894 ']' 00:06:21.420 20:04:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4049894 00:06:21.420 20:04:46 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:21.420 20:04:46 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.420 20:04:46 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4049894 00:06:21.420 20:04:46 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:21.420 20:04:46 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:21.420 20:04:46 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4049894' 00:06:21.420 killing process with pid 4049894 00:06:21.420 20:04:46 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 4049894 00:06:21.420 20:04:46 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 4049894 00:06:21.677 20:04:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:21.677 20:04:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:21.677 20:04:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4049630 ]] 00:06:21.677 20:04:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4049630 00:06:21.677 20:04:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4049630 ']' 00:06:21.677 20:04:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4049630 00:06:21.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (4049630) - No such process 00:06:21.678 20:04:46 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 4049630 is not found' 00:06:21.678 Process with pid 4049630 is not found 00:06:21.678 20:04:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4049894 ]] 00:06:21.678 20:04:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4049894 00:06:21.678 20:04:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4049894 ']' 00:06:21.678 20:04:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4049894 00:06:21.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (4049894) - No such process 00:06:21.678 20:04:46 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 4049894 is not found' 00:06:21.678 Process with pid 4049894 is not found 00:06:21.678 20:04:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:21.678 00:06:21.678 real 0m16.623s 00:06:21.678 user 0m30.410s 00:06:21.678 sys 0m4.984s 00:06:21.678 20:04:46 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.678 20:04:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.678 ************************************ 00:06:21.678 END TEST cpu_locks 00:06:21.678 ************************************ 00:06:21.678 20:04:47 event -- common/autotest_common.sh@1142 -- # return 0 00:06:21.678 00:06:21.678 real 0m40.548s 00:06:21.678 user 1m18.104s 00:06:21.678 sys 0m8.702s 00:06:21.678 20:04:47 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.678 20:04:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.678 ************************************ 00:06:21.678 END TEST event 00:06:21.678 ************************************ 00:06:21.935 20:04:47 -- common/autotest_common.sh@1142 -- # return 0 00:06:21.935 20:04:47 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:21.935 20:04:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.935 20:04:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.935 
20:04:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.935 ************************************ 00:06:21.935 START TEST thread 00:06:21.935 ************************************ 00:06:21.935 20:04:47 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:21.935 * Looking for test storage... 00:06:21.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:21.935 20:04:47 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:21.935 20:04:47 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:21.935 20:04:47 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.935 20:04:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.935 ************************************ 00:06:21.935 START TEST thread_poller_perf 00:06:21.935 ************************************ 00:06:21.935 20:04:47 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:21.935 [2024-07-15 20:04:47.215597] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:21.935 [2024-07-15 20:04:47.215664] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4050511 ] 00:06:21.935 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.193 [2024-07-15 20:04:47.296745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.193 [2024-07-15 20:04:47.384171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.193 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:23.127 ====================================== 00:06:23.127 busy:2208041906 (cyc) 00:06:23.127 total_run_count: 255000 00:06:23.127 tsc_hz: 2200000000 (cyc) 00:06:23.127 ====================================== 00:06:23.127 poller_cost: 8658 (cyc), 3935 (nsec) 00:06:23.127 00:06:23.127 real 0m1.274s 00:06:23.127 user 0m1.168s 00:06:23.127 sys 0m0.100s 00:06:23.127 20:04:48 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.127 20:04:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:23.127 ************************************ 00:06:23.127 END TEST thread_poller_perf 00:06:23.127 ************************************ 00:06:23.385 20:04:48 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:23.385 20:04:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:23.385 20:04:48 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:23.385 20:04:48 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.385 20:04:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.385 ************************************ 00:06:23.385 START TEST thread_poller_perf 00:06:23.385 ************************************ 00:06:23.385 20:04:48 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:23.385 [2024-07-15 20:04:48.552499] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:23.385 [2024-07-15 20:04:48.552565] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4050792 ] 00:06:23.385 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.385 [2024-07-15 20:04:48.632578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.385 [2024-07-15 20:04:48.719687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.385 Running 1000 pollers for 1 seconds with 0 microseconds period. 
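The poller_cost figures in the summary above are consistent with busy cycles divided by total_run_count, with the nanosecond value then derived from the reported tsc_hz; a rough bash cross-check (an approximation, integer-truncated the same way the tool reports it):

  echo $(( 2208041906 / 255000 ))              # ~8658 cyc per poller iteration
  echo $(( 8658 * 1000000000 / 2200000000 ))   # ~3935 nsec at the 2.2 GHz TSC

The 0-microsecond-period run reported next obeys the same relation, just with a much lower per-poll cost, presumably because 0-period pollers are plain (non-timed) pollers.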
00:06:24.813 ====================================== 00:06:24.813 busy:2202602688 (cyc) 00:06:24.813 total_run_count: 3379000 00:06:24.813 tsc_hz: 2200000000 (cyc) 00:06:24.813 ====================================== 00:06:24.813 poller_cost: 651 (cyc), 295 (nsec) 00:06:24.813 00:06:24.813 real 0m1.267s 00:06:24.813 user 0m1.172s 00:06:24.813 sys 0m0.089s 00:06:24.813 20:04:49 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.813 20:04:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.813 ************************************ 00:06:24.813 END TEST thread_poller_perf 00:06:24.813 ************************************ 00:06:24.813 20:04:49 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:24.813 20:04:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:24.813 00:06:24.813 real 0m2.753s 00:06:24.813 user 0m2.427s 00:06:24.813 sys 0m0.332s 00:06:24.813 20:04:49 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.813 20:04:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.813 ************************************ 00:06:24.813 END TEST thread 00:06:24.813 ************************************ 00:06:24.813 20:04:49 -- common/autotest_common.sh@1142 -- # return 0 00:06:24.813 20:04:49 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:24.813 20:04:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.813 20:04:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.813 20:04:49 -- common/autotest_common.sh@10 -- # set +x 00:06:24.813 ************************************ 00:06:24.813 START TEST accel 00:06:24.813 ************************************ 00:06:24.813 20:04:49 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:24.813 * Looking for test storage... 00:06:24.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:24.813 20:04:49 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:24.813 20:04:49 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:24.813 20:04:49 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:24.813 20:04:49 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=4051109 00:06:24.813 20:04:49 accel -- accel/accel.sh@63 -- # waitforlisten 4051109 00:06:24.813 20:04:49 accel -- common/autotest_common.sh@829 -- # '[' -z 4051109 ']' 00:06:24.813 20:04:49 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.813 20:04:49 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.813 20:04:49 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:24.813 20:04:49 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.813 20:04:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.813 20:04:49 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:24.813 20:04:49 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:24.813 20:04:49 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.813 20:04:49 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.813 20:04:49 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.813 20:04:49 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.813 20:04:49 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.813 20:04:49 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:24.813 20:04:49 accel -- accel/accel.sh@41 -- # jq -r . 00:06:24.813 [2024-07-15 20:04:50.049834] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:24.813 [2024-07-15 20:04:50.049893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4051109 ] 00:06:24.813 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.813 [2024-07-15 20:04:50.131612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.071 [2024-07-15 20:04:50.221560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.008 20:04:51 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.008 20:04:51 accel -- common/autotest_common.sh@862 -- # return 0 00:06:26.008 20:04:51 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:26.008 20:04:51 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:26.008 20:04:51 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:26.008 20:04:51 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:26.008 20:04:51 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:26.008 20:04:51 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:26.008 20:04:51 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.008 20:04:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.008 20:04:51 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:26.008 20:04:51 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.008 20:04:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.008 20:04:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.008 20:04:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.008 20:04:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.008 20:04:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.008 20:04:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.008 20:04:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.008 20:04:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.008 20:04:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.008 20:04:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.008 20:04:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.008 20:04:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.008 20:04:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.008 20:04:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.008 20:04:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.008 20:04:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.008 20:04:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.008 20:04:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.008 20:04:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.008 20:04:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.008 20:04:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.008 20:04:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.008 20:04:51 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.008 20:04:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.008 20:04:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.008 20:04:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.008 20:04:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.008 20:04:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.008 20:04:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:26.008 20:04:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:26.008 20:04:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:26.008 20:04:51 accel -- accel/accel.sh@75 -- # killprocess 4051109 00:06:26.008 20:04:51 accel -- common/autotest_common.sh@948 -- # '[' -z 4051109 ']' 00:06:26.008 20:04:51 accel -- common/autotest_common.sh@952 -- # kill -0 4051109 00:06:26.008 20:04:51 accel -- common/autotest_common.sh@953 -- # uname 00:06:26.008 20:04:51 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:26.008 20:04:51 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4051109 00:06:26.008 20:04:51 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:26.008 20:04:51 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:26.008 20:04:51 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4051109' 00:06:26.008 killing process with pid 4051109 00:06:26.008 20:04:51 accel -- common/autotest_common.sh@967 -- # kill 4051109 00:06:26.008 20:04:51 accel -- common/autotest_common.sh@972 -- # wait 4051109 00:06:26.576 20:04:51 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:26.576 20:04:51 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:26.576 20:04:51 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:26.576 20:04:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.576 20:04:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.576 20:04:51 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:26.576 20:04:51 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:26.576 20:04:51 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:26.576 20:04:51 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.576 20:04:51 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.576 20:04:51 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.576 20:04:51 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.576 20:04:51 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.576 20:04:51 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:26.576 20:04:51 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:26.576 20:04:51 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.576 20:04:51 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:26.576 20:04:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.576 20:04:51 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:26.576 20:04:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:26.576 20:04:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.576 20:04:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.576 ************************************ 00:06:26.576 START TEST accel_missing_filename 00:06:26.576 ************************************ 00:06:26.576 20:04:51 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:26.576 20:04:51 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:26.576 20:04:51 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:26.576 20:04:51 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:26.576 20:04:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.576 20:04:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:26.576 20:04:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.576 20:04:51 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:26.576 20:04:51 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:26.576 20:04:51 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:26.576 20:04:51 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.576 20:04:51 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.576 20:04:51 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.576 20:04:51 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.576 20:04:51 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.576 20:04:51 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:26.576 20:04:51 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:26.576 [2024-07-15 20:04:51.802696] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:26.577 [2024-07-15 20:04:51.802757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4051421 ] 00:06:26.577 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.577 [2024-07-15 20:04:51.884412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.835 [2024-07-15 20:04:51.972615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.835 [2024-07-15 20:04:52.017637] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.835 [2024-07-15 20:04:52.080619] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:26.835 A filename is required. 
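The "A filename is required" failure is the intended negative result here: for compress/decompress workloads accel_perf expects an uncompressed input file via -l, per its help output. What clears this particular error is supplying that file, as the compress_verify test below does with the same bib input (that test then trips over the -y verify flag instead), roughly:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib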
00:06:26.835 20:04:52 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:26.835 20:04:52 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:26.835 20:04:52 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:26.835 20:04:52 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:26.835 20:04:52 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:26.835 20:04:52 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:26.835 00:06:26.835 real 0m0.379s 00:06:26.835 user 0m0.271s 00:06:26.835 sys 0m0.126s 00:06:26.835 20:04:52 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.835 20:04:52 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:26.835 ************************************ 00:06:26.835 END TEST accel_missing_filename 00:06:26.835 ************************************ 00:06:26.835 20:04:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.835 20:04:52 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:26.835 20:04:52 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:26.835 20:04:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.835 20:04:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.094 ************************************ 00:06:27.094 START TEST accel_compress_verify 00:06:27.094 ************************************ 00:06:27.094 20:04:52 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:27.094 20:04:52 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:27.094 20:04:52 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:27.094 20:04:52 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:27.094 20:04:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.094 20:04:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:27.094 20:04:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.094 20:04:52 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:27.094 20:04:52 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:27.094 20:04:52 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:27.094 20:04:52 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.094 20:04:52 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.095 20:04:52 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.095 20:04:52 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.095 20:04:52 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.095 20:04:52 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:27.095 20:04:52 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:27.095 [2024-07-15 20:04:52.240499] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:27.095 [2024-07-15 20:04:52.240548] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4051451 ] 00:06:27.095 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.095 [2024-07-15 20:04:52.321068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.095 [2024-07-15 20:04:52.408132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.355 [2024-07-15 20:04:52.453244] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.355 [2024-07-15 20:04:52.516405] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:27.355 00:06:27.355 Compression does not support the verify option, aborting. 00:06:27.355 20:04:52 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:27.355 20:04:52 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:27.355 20:04:52 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:27.355 20:04:52 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:27.355 20:04:52 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:27.355 20:04:52 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:27.355 00:06:27.355 real 0m0.380s 00:06:27.355 user 0m0.280s 00:06:27.355 sys 0m0.140s 00:06:27.355 20:04:52 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.355 20:04:52 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:27.355 ************************************ 00:06:27.355 END TEST accel_compress_verify 00:06:27.355 ************************************ 00:06:27.355 20:04:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.355 20:04:52 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:27.355 20:04:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:27.355 20:04:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.355 20:04:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.355 ************************************ 00:06:27.355 START TEST accel_wrong_workload 00:06:27.355 ************************************ 00:06:27.355 20:04:52 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:27.355 20:04:52 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:27.355 20:04:52 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:27.355 20:04:52 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:27.355 20:04:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.355 20:04:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:27.355 20:04:52 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.355 20:04:52 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:27.355 20:04:52 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:27.355 20:04:52 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:27.355 20:04:52 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.355 20:04:52 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.355 20:04:52 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.355 20:04:52 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.355 20:04:52 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.355 20:04:52 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:27.355 20:04:52 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:27.355 Unsupported workload type: foobar 00:06:27.355 [2024-07-15 20:04:52.678087] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:27.355 accel_perf options: 00:06:27.355 [-h help message] 00:06:27.355 [-q queue depth per core] 00:06:27.355 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:27.355 [-T number of threads per core 00:06:27.355 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:27.355 [-t time in seconds] 00:06:27.355 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:27.355 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:27.356 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:27.356 [-l for compress/decompress workloads, name of uncompressed input file 00:06:27.356 [-S for crc32c workload, use this seed value (default 0) 00:06:27.356 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:27.356 [-f for fill workload, use this BYTE value (default 255) 00:06:27.356 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:27.356 [-y verify result if this switch is on] 00:06:27.356 [-a tasks to allocate per core (default: same value as -q)] 00:06:27.356 Can be used to spread operations across a wider range of memory. 
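Against the rejected 'foobar' workload, a well-formed invocation built from the options listed above (the same shape the crc32c test further down runs, minus its config fd) would be along the lines of:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y

i.e. a supported workload type for -w, an optional crc32c seed via -S, and -y to verify the result.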
00:06:27.356 20:04:52 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:27.356 20:04:52 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:27.356 20:04:52 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:27.356 20:04:52 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:27.356 00:06:27.356 real 0m0.029s 00:06:27.356 user 0m0.012s 00:06:27.356 sys 0m0.017s 00:06:27.356 20:04:52 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.356 20:04:52 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:27.356 ************************************ 00:06:27.356 END TEST accel_wrong_workload 00:06:27.356 ************************************ 00:06:27.356 Error: writing output failed: Broken pipe 00:06:27.616 20:04:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.616 20:04:52 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:27.616 20:04:52 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:27.616 20:04:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.616 20:04:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.616 ************************************ 00:06:27.616 START TEST accel_negative_buffers 00:06:27.616 ************************************ 00:06:27.616 20:04:52 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:27.616 20:04:52 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:27.616 20:04:52 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:27.616 20:04:52 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:27.616 20:04:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.616 20:04:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:27.616 20:04:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.616 20:04:52 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:27.616 20:04:52 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:27.616 20:04:52 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:27.616 20:04:52 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.616 20:04:52 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.616 20:04:52 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.616 20:04:52 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.616 20:04:52 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.616 20:04:52 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:27.616 20:04:52 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:27.616 -x option must be non-negative. 
00:06:27.616 [2024-07-15 20:04:52.755641] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:27.616 accel_perf options: 00:06:27.616 [-h help message] 00:06:27.616 [-q queue depth per core] 00:06:27.616 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:27.616 [-T number of threads per core 00:06:27.616 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:27.616 [-t time in seconds] 00:06:27.616 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:27.616 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:27.616 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:27.616 [-l for compress/decompress workloads, name of uncompressed input file 00:06:27.616 [-S for crc32c workload, use this seed value (default 0) 00:06:27.616 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:27.616 [-f for fill workload, use this BYTE value (default 255) 00:06:27.616 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:27.616 [-y verify result if this switch is on] 00:06:27.616 [-a tasks to allocate per core (default: same value as -q)] 00:06:27.616 Can be used to spread operations across a wider range of memory. 00:06:27.616 20:04:52 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:27.616 20:04:52 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:27.616 20:04:52 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:27.616 20:04:52 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:27.616 00:06:27.616 real 0m0.029s 00:06:27.616 user 0m0.019s 00:06:27.616 sys 0m0.009s 00:06:27.616 20:04:52 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.616 20:04:52 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:27.617 ************************************ 00:06:27.617 END TEST accel_negative_buffers 00:06:27.617 ************************************ 00:06:27.617 Error: writing output failed: Broken pipe 00:06:27.617 20:04:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.617 20:04:52 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:27.617 20:04:52 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:27.617 20:04:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.617 20:04:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.617 ************************************ 00:06:27.617 START TEST accel_crc32c 00:06:27.617 ************************************ 00:06:27.617 20:04:52 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:27.617 20:04:52 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:27.617 20:04:52 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:27.617 20:04:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.617 20:04:52 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:27.617 20:04:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.617 20:04:52 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:27.617 20:04:52 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:27.617 20:04:52 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.617 20:04:52 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.617 20:04:52 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.617 20:04:52 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.617 20:04:52 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.617 20:04:52 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:27.617 20:04:52 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:27.617 [2024-07-15 20:04:52.828916] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:27.617 [2024-07-15 20:04:52.828949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4051727 ] 00:06:27.617 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.617 [2024-07-15 20:04:52.897771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.876 [2024-07-15 20:04:52.988228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.876 20:04:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.876 20:04:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.876 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.876 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.876 20:04:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.876 20:04:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.876 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.876 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.876 20:04:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:27.876 20:04:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.876 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.876 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.876 20:04:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.876 20:04:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.876 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.876 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.876 20:04:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.876 20:04:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.876 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.877 20:04:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:29.251 20:04:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.251 00:06:29.251 real 0m1.362s 00:06:29.251 user 0m1.253s 00:06:29.251 sys 0m0.113s 00:06:29.251 20:04:54 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.251 20:04:54 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:29.251 ************************************ 00:06:29.251 END TEST accel_crc32c 00:06:29.251 ************************************ 00:06:29.251 20:04:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.251 20:04:54 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:29.251 20:04:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:29.251 20:04:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.251 20:04:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.251 ************************************ 00:06:29.251 START TEST accel_crc32c_C2 00:06:29.251 ************************************ 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:29.251 20:04:54 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:29.251 [2024-07-15 20:04:54.273412] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:29.251 [2024-07-15 20:04:54.273513] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4051999 ] 00:06:29.251 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.251 [2024-07-15 20:04:54.387769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.251 [2024-07-15 20:04:54.478723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.251 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:29.252 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.252 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.252 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.252 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.252 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.252 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.252 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.252 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.252 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.252 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:29.252 20:04:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.630 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.630 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.630 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.631 00:06:30.631 real 0m1.430s 00:06:30.631 user 0m1.275s 00:06:30.631 sys 0m0.160s 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.631 20:04:55 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:30.631 ************************************ 00:06:30.631 END TEST accel_crc32c_C2 00:06:30.631 ************************************ 00:06:30.631 20:04:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.631 20:04:55 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:30.631 20:04:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:30.631 20:04:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.631 20:04:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.631 ************************************ 00:06:30.631 START TEST accel_copy 00:06:30.631 ************************************ 00:06:30.631 20:04:55 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
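(The repeated "IFS=:", "read -r var val" and "case \"$var\" in" entries filling this trace are the harness stepping through a colon-separated key/value list one field at a time. The snippet below is illustrative only, not the accel.sh source: the same shell idiom applied to a tiny hypothetical list.)

    # Illustrative IFS=: / read -r var val / case "$var" loop over key:value lines.
    printf 'opc: crc32c\nmodule: software\n' |
    while IFS=: read -r var val; do
        case "$var" in
            opc)    echo "workload=${val# }" ;;   # e.g. crc32c, copy, fill
            module) echo "engine=${val# }"   ;;   # e.g. software
        esac
    done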
00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:30.631 [2024-07-15 20:04:55.758134] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:30.631 [2024-07-15 20:04:55.758201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4052305 ] 00:06:30.631 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.631 [2024-07-15 20:04:55.838786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.631 [2024-07-15 20:04:55.928991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.631 20:04:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.011 
20:04:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:32.011 20:04:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.011 00:06:32.011 real 0m1.389s 00:06:32.011 user 0m1.255s 00:06:32.011 sys 0m0.138s 00:06:32.011 20:04:57 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.011 20:04:57 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:32.011 ************************************ 00:06:32.011 END TEST accel_copy 00:06:32.011 ************************************ 00:06:32.011 20:04:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.011 20:04:57 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:32.011 20:04:57 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:32.011 20:04:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.011 20:04:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.011 ************************************ 00:06:32.011 START TEST accel_fill 00:06:32.011 ************************************ 00:06:32.011 20:04:57 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:32.011 20:04:57 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:32.011 20:04:57 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:32.011 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.011 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.011 20:04:57 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:32.011 20:04:57 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:32.011 20:04:57 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:32.011 20:04:57 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.011 20:04:57 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.011 20:04:57 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.011 20:04:57 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.011 20:04:57 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.011 20:04:57 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:32.011 20:04:57 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:32.011 [2024-07-15 20:04:57.207025] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:32.011 [2024-07-15 20:04:57.207091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4052587 ] 00:06:32.011 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.011 [2024-07-15 20:04:57.288664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.271 [2024-07-15 20:04:57.378676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
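(Each accel_perf start in this section logs "EAL: No free 2048 kB hugepages reported on node 1"; the runs in this job still complete, so the notice looks informational here, but hugepage state can be cross-checked on the host. The commands below are a read-only aside, not part of the harness.)

    # Read-only look at 2048 kB hugepage availability, per NUMA node and overall.
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages
    grep -i '^huge' /proc/meminfo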
00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.271 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.272 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.272 20:04:57 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.272 20:04:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.272 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.272 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.272 20:04:57 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:32.272 20:04:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.272 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.272 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.272 20:04:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.272 20:04:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.272 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.272 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:32.272 20:04:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:32.272 20:04:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:32.272 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:32.272 20:04:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.651 20:04:58 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:33.651 20:04:58 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.651 00:06:33.651 real 0m1.389s 00:06:33.651 user 0m0.006s 00:06:33.651 sys 0m0.001s 00:06:33.651 20:04:58 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.651 20:04:58 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:33.651 ************************************ 00:06:33.651 END TEST accel_fill 00:06:33.651 ************************************ 00:06:33.651 20:04:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.651 20:04:58 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:33.651 20:04:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:33.651 20:04:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.651 20:04:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.651 ************************************ 00:06:33.651 START TEST accel_copy_crc32c 00:06:33.651 ************************************ 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:33.651 [2024-07-15 20:04:58.655259] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:33.651 [2024-07-15 20:04:58.655309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4052866 ] 00:06:33.651 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.651 [2024-07-15 20:04:58.736479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.651 [2024-07-15 20:04:58.826606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.651 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.652 
20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.652 20:04:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.031 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.031 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.031 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.031 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.031 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.031 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.031 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.031 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.031 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.031 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.031 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.031 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.031 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.031 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.031 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.031 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.031 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.031 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.031 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.032 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.032 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.032 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.032 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.032 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:35.032 20:05:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.032 00:06:35.032 real 0m1.387s 00:06:35.032 user 0m0.008s 00:06:35.032 sys 0m0.000s 00:06:35.032 20:05:00 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.032 20:05:00 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:35.032 ************************************ 00:06:35.032 END TEST accel_copy_crc32c 00:06:35.032 ************************************ 00:06:35.032 20:05:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.032 20:05:00 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:35.032 20:05:00 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:35.032 20:05:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.032 20:05:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.032 ************************************ 00:06:35.032 START TEST accel_copy_crc32c_C2 00:06:35.032 ************************************ 00:06:35.032 20:05:00 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:35.032 [2024-07-15 20:05:00.102896] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:35.032 [2024-07-15 20:05:00.102967] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4053154 ] 00:06:35.032 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.032 [2024-07-15 20:05:00.183249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.032 [2024-07-15 20:05:00.270888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:35.032 20:05:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.412 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.413 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.413 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.413 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
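(By this point the suite has cycled the software engine through crc32c, copy, fill and copy_crc32c, each as a 1-second verified accel_perf pass. A hypothetical condensed equivalent, reusing only the binary path and flags already shown in this log, would be:)

    # One short software-engine pass per workload covered so far in this section.
    # The real harness also runs chained "-C 2" variants and adds per-workload flags
    # (e.g. -f 128 -q 64 -a 64 for fill, -S 32 for crc32c) that this sketch omits.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    for wl in crc32c copy fill copy_crc32c; do
        echo "== accel_perf -w $wl =="
        "$SPDK_DIR/build/examples/accel_perf" -t 1 -w "$wl" -y
    done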
00:06:36.413 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.413 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:36.413 20:05:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.413 00:06:36.413 real 0m1.383s 00:06:36.413 user 0m0.006s 00:06:36.413 sys 0m0.002s 00:06:36.413 20:05:01 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.413 20:05:01 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:36.413 ************************************ 00:06:36.413 END TEST accel_copy_crc32c_C2 00:06:36.413 ************************************ 00:06:36.413 20:05:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.413 20:05:01 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:36.413 20:05:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:36.413 20:05:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.413 20:05:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.413 ************************************ 00:06:36.413 START TEST accel_dualcast 00:06:36.413 ************************************ 00:06:36.413 20:05:01 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:36.413 [2024-07-15 20:05:01.544854] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:06:36.413 [2024-07-15 20:05:01.544907] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4053435 ] 00:06:36.413 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.413 [2024-07-15 20:05:01.625581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.413 [2024-07-15 20:05:01.712758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.413 20:05:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.793 20:05:02 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:37.793 20:05:02 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.793 00:06:37.793 real 0m1.384s 00:06:37.793 user 0m1.251s 00:06:37.793 sys 0m0.134s 00:06:37.793 20:05:02 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.793 20:05:02 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:37.793 ************************************ 00:06:37.793 END TEST accel_dualcast 00:06:37.793 ************************************ 00:06:37.793 20:05:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.793 20:05:02 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:37.793 20:05:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:37.793 20:05:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.793 20:05:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.793 ************************************ 00:06:37.793 START TEST accel_compare 00:06:37.793 ************************************ 00:06:37.793 20:05:02 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:37.793 20:05:02 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:37.793 20:05:02 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:37.793 20:05:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:37.793 20:05:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:37.793 20:05:02 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:37.793 20:05:02 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:37.793 20:05:02 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:37.793 20:05:02 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.793 20:05:02 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.793 20:05:02 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.793 20:05:02 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.793 20:05:02 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.793 20:05:02 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:37.793 20:05:02 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:37.793 [2024-07-15 20:05:02.987269] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:06:37.793 [2024-07-15 20:05:02.987321] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4053729 ] 00:06:37.793 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.793 [2024-07-15 20:05:03.066383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.053 [2024-07-15 20:05:03.153757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.053 20:05:03 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.053 20:05:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.999 
20:05:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:38.999 20:05:04 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.999 00:06:38.999 real 0m1.382s 00:06:38.999 user 0m1.257s 00:06:38.999 sys 0m0.129s 00:06:38.999 20:05:04 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.999 20:05:04 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:38.999 ************************************ 00:06:38.999 END TEST accel_compare 00:06:38.999 ************************************ 00:06:39.274 20:05:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.274 20:05:04 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:39.274 20:05:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:39.274 20:05:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.274 20:05:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.274 ************************************ 00:06:39.274 START TEST accel_xor 00:06:39.274 ************************************ 00:06:39.274 20:05:04 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:39.274 20:05:04 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:39.274 20:05:04 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:39.274 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.274 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.274 20:05:04 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:39.274 20:05:04 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:39.274 20:05:04 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:39.274 20:05:04 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.274 20:05:04 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.275 20:05:04 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.275 20:05:04 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.275 20:05:04 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.275 20:05:04 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:39.275 20:05:04 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:39.275 [2024-07-15 20:05:04.437007] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:06:39.275 [2024-07-15 20:05:04.437077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4054017 ] 00:06:39.275 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.275 [2024-07-15 20:05:04.518388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.275 [2024-07-15 20:05:04.604914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:05:04 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:05:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.483 20:05:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:40.484 20:05:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.484 00:06:40.484 real 0m1.387s 00:06:40.484 user 0m0.006s 00:06:40.484 sys 0m0.003s 00:06:40.484 20:05:05 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.484 20:05:05 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:40.484 ************************************ 00:06:40.484 END TEST accel_xor 00:06:40.484 ************************************ 00:06:40.484 20:05:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.484 20:05:05 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:40.484 20:05:05 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:40.484 20:05:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.484 20:05:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.743 ************************************ 00:06:40.743 START TEST accel_xor 00:06:40.743 ************************************ 00:06:40.743 20:05:05 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:40.743 20:05:05 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:40.743 20:05:05 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:40.743 20:05:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.743 20:05:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.743 20:05:05 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:40.743 20:05:05 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:40.743 20:05:05 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:40.743 20:05:05 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.743 20:05:05 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.743 20:05:05 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.743 20:05:05 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.743 20:05:05 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.743 20:05:05 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:40.743 20:05:05 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:40.743 [2024-07-15 20:05:05.877992] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
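For reference (an aside, not part of the captured console output): the two accel_xor cases differ only in source count, the second run adding -x 3 to the same accel_perf invocation. A minimal sketch of both, under the same path and flag assumptions as the earlier sketch (-x is taken here to mean the number of XOR source buffers, which is an assumption, not something this log states):
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w xor -y          # two-source xor, as in the first TEST accel_xor
  ./build/examples/accel_perf -t 1 -w xor -y -x 3     # three-source variant, as in the second TEST accel_xor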
00:06:40.743 [2024-07-15 20:05:05.878042] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4054299 ] 00:06:40.743 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.743 [2024-07-15 20:05:05.958307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.743 [2024-07-15 20:05:06.046335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.743 20:05:06 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.743 20:05:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:40.744 20:05:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.744 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.744 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.003 20:05:06 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:41.003 20:05:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.003 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.003 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.003 20:05:06 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.003 20:05:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.003 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.003 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.003 20:05:06 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:41.003 20:05:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.003 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.003 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.003 20:05:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.003 20:05:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.003 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.003 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.003 20:05:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.003 20:05:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.003 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.003 20:05:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:41.941 20:05:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.941 00:06:41.941 real 0m1.383s 00:06:41.941 user 0m0.006s 00:06:41.941 sys 0m0.002s 00:06:41.941 20:05:07 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.941 20:05:07 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:41.941 ************************************ 00:06:41.941 END TEST accel_xor 00:06:41.941 ************************************ 00:06:41.941 20:05:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:41.941 20:05:07 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:41.941 20:05:07 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:41.941 20:05:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.941 20:05:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.201 ************************************ 00:06:42.201 START TEST accel_dif_verify 00:06:42.201 ************************************ 00:06:42.201 20:05:07 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:42.201 [2024-07-15 20:05:07.322753] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
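For reference (an aside, not part of the captured console output): the DIF cases (dif_verify, dif_generate, dif_generate_copy) drive the same accel_perf harness; the trace shows the extra per-test values the wrapper sets, two 4096-byte buffers plus a 512-byte and an 8-byte value, before the 1-second software run. A minimal sketch of the dif_verify invocation itself, matching the command line in the trace and carrying the same path assumptions as above:
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w dif_verify      # 1-second software dif_verify pass (illustrative sketch only)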
00:06:42.201 [2024-07-15 20:05:07.322820] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4054576 ] 00:06:42.201 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.201 [2024-07-15 20:05:07.404711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.201 [2024-07-15 20:05:07.491879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.201 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.202 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.202 20:05:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:42.202 20:05:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.202 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.202 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:42.202 20:05:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:42.202 20:05:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:42.202 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:42.202 20:05:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.576 20:05:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.577 20:05:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.577 20:05:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.577 20:05:08 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.577 20:05:08 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:43.577 20:05:08 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.577 00:06:43.577 real 0m1.387s 00:06:43.577 user 0m0.006s 00:06:43.577 sys 0m0.002s 00:06:43.577 20:05:08 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.577 20:05:08 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:43.577 ************************************ 00:06:43.577 END TEST accel_dif_verify 00:06:43.577 ************************************ 00:06:43.577 20:05:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.577 20:05:08 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:43.577 20:05:08 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:43.577 20:05:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.577 20:05:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.577 ************************************ 00:06:43.577 START TEST accel_dif_generate 00:06:43.577 ************************************ 00:06:43.577 20:05:08 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:43.577 20:05:08 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:43.577 20:05:08 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:43.577 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.577 
20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.577 20:05:08 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:43.577 20:05:08 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:43.577 20:05:08 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:43.577 20:05:08 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.577 20:05:08 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.577 20:05:08 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.577 20:05:08 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.577 20:05:08 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.577 20:05:08 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:43.577 20:05:08 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:43.577 [2024-07-15 20:05:08.763272] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:43.577 [2024-07-15 20:05:08.763322] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4054864 ] 00:06:43.577 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.577 [2024-07-15 20:05:08.843268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.835 [2024-07-15 20:05:08.930147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:43.835 20:05:08 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 20:05:08 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:43.835 20:05:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.210 20:05:10 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:45.210 20:05:10 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.210 00:06:45.210 real 0m1.385s 00:06:45.210 user 0m0.008s 00:06:45.210 sys 0m0.000s 00:06:45.210 20:05:10 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.210 20:05:10 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:45.210 ************************************ 00:06:45.210 END TEST accel_dif_generate 00:06:45.210 ************************************ 00:06:45.210 20:05:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:45.210 20:05:10 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:45.210 20:05:10 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:45.210 20:05:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.210 20:05:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.210 ************************************ 00:06:45.210 START TEST accel_dif_generate_copy 00:06:45.210 ************************************ 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:45.210 [2024-07-15 20:05:10.200373] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
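Note on the accel_dif_generate result above and the accel_dif_generate_copy run starting here: both are accel.sh driving the accel_perf example for one second on the software path. A minimal manual sketch, assuming the same workspace layout as this job and that the -c /dev/fd/62 JSON config can be dropped because no accel module override was configured in these runs:

  # hedged reproduction of the dif_generate / dif_generate_copy runs (not the exact CI command line)
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w dif_generate
  "$SPDK/build/examples/accel_perf" -t 1 -w dif_generate_copy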
00:06:45.210 [2024-07-15 20:05:10.200428] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4055141 ] 00:06:45.210 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.210 [2024-07-15 20:05:10.269413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.210 [2024-07-15 20:05:10.357279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.210 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:45.211 20:05:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.589 00:06:46.589 real 0m1.369s 00:06:46.589 user 0m0.006s 00:06:46.589 sys 0m0.001s 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.589 20:05:11 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:46.589 ************************************ 00:06:46.589 END TEST accel_dif_generate_copy 00:06:46.589 ************************************ 00:06:46.589 20:05:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.589 20:05:11 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:46.589 20:05:11 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:46.589 20:05:11 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:46.589 20:05:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.589 20:05:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.589 ************************************ 00:06:46.589 START TEST accel_comp 00:06:46.589 ************************************ 00:06:46.589 20:05:11 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:46.589 20:05:11 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:46.589 20:05:11 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:46.589 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.589 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.589 20:05:11 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:46.589 20:05:11 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:46.589 20:05:11 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:46.589 20:05:11 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.589 20:05:11 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.589 20:05:11 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.589 20:05:11 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.589 20:05:11 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.589 20:05:11 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:46.589 20:05:11 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:46.589 [2024-07-15 20:05:11.632354] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:46.589 [2024-07-15 20:05:11.632404] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4055426 ] 00:06:46.589 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.589 [2024-07-15 20:05:11.713490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.589 [2024-07-15 20:05:11.800433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.589 20:05:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:46.589 20:05:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.589 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.589 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.589 20:05:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:46.589 20:05:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 20:05:11 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:46.590 20:05:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:47.974 20:05:12 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.974 00:06:47.974 real 0m1.389s 00:06:47.974 user 0m0.008s 00:06:47.974 sys 0m0.000s 00:06:47.974 20:05:12 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.974 20:05:12 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:47.974 ************************************ 00:06:47.974 END TEST accel_comp 00:06:47.974 ************************************ 00:06:47.974 20:05:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.974 20:05:13 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:47.974 20:05:13 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:47.974 20:05:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.974 20:05:13 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:47.974 ************************************ 00:06:47.974 START TEST accel_decomp 00:06:47.974 ************************************ 00:06:47.974 20:05:13 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:47.974 [2024-07-15 20:05:13.073795] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
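Note on the accel_comp test above and the accel_decomp run starting here: both feed the bundled corpus at test/accel/bib to accel_perf through -l, and the decompress runs additionally pass -y; otherwise they are the same one-second software-path runs as the dif tests. A hedged sketch under the same assumptions as the earlier note:

  # hedged reproduction of the compress / decompress runs
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w compress   -l "$SPDK/test/accel/bib"
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y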
00:06:47.974 [2024-07-15 20:05:13.073846] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4055706 ] 00:06:47.974 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.974 [2024-07-15 20:05:13.152467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.974 [2024-07-15 20:05:13.239951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:47.974 20:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:47.975 20:05:13 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.354 20:05:14 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:49.354 20:05:14 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.354 00:06:49.354 real 0m1.380s 00:06:49.354 user 0m0.006s 00:06:49.354 sys 0m0.002s 00:06:49.354 20:05:14 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.354 20:05:14 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:49.354 ************************************ 00:06:49.354 END TEST accel_decomp 00:06:49.354 ************************************ 00:06:49.354 20:05:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.354 20:05:14 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:49.354 20:05:14 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:49.354 20:05:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.354 20:05:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.354 ************************************ 00:06:49.355 START TEST accel_decomp_full 00:06:49.355 ************************************ 00:06:49.355 20:05:14 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:49.355 20:05:14 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:49.355 20:05:14 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:49.355 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.355 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.355 20:05:14 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:49.355 20:05:14 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:49.355 20:05:14 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:49.355 20:05:14 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.355 20:05:14 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.355 20:05:14 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.355 20:05:14 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.355 20:05:14 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.355 20:05:14 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:49.355 20:05:14 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:49.355 [2024-07-15 20:05:14.518226] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:49.355 [2024-07-15 20:05:14.518451] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4055991 ] 00:06:49.355 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.355 [2024-07-15 20:05:14.599446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.355 [2024-07-15 20:05:14.686352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.614 20:05:14 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:49.614 20:05:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:50.553 20:05:15 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.553 00:06:50.553 real 0m1.406s 00:06:50.553 user 0m0.008s 00:06:50.553 sys 0m0.000s 00:06:50.553 20:05:15 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.553 20:05:15 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:50.553 ************************************ 00:06:50.553 END TEST accel_decomp_full 00:06:50.553 ************************************ 00:06:50.812 20:05:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.812 20:05:15 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:50.812 20:05:15 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:06:50.812 20:05:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.812 20:05:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.812 ************************************ 00:06:50.812 START TEST accel_decomp_mcore 00:06:50.812 ************************************ 00:06:50.812 20:05:15 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:50.812 20:05:15 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:50.812 20:05:15 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:50.812 20:05:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:50.812 20:05:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:50.812 20:05:15 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:50.813 20:05:15 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:50.813 20:05:15 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:50.813 20:05:15 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.813 20:05:15 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.813 20:05:15 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.813 20:05:15 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.813 20:05:15 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.813 20:05:15 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:50.813 20:05:15 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:50.813 [2024-07-15 20:05:15.974152] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
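Note on accel_decomp_mcore, starting above: it is the same decompress workload as accel_decomp but with the core mask -m 0xf on the accel_perf command line, so the run below brings up four reactors and its user CPU time (roughly 4.6 s against a 1-second wall clock) covers all four cores. A hedged sketch under the same assumptions:

  # hedged reproduction of the multi-core decompress run
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf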
00:06:50.813 [2024-07-15 20:05:15.974200] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4056269 ] 00:06:50.813 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.813 [2024-07-15 20:05:16.053220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.813 [2024-07-15 20:05:16.144566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.813 [2024-07-15 20:05:16.144668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.813 [2024-07-15 20:05:16.144737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.813 [2024-07-15 20:05:16.144740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.072 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.072 20:05:16 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:51.073 20:05:16 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.011 00:06:52.011 real 0m1.398s 00:06:52.011 user 0m4.631s 00:06:52.011 sys 0m0.139s 00:06:52.011 20:05:17 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.011 20:05:17 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:52.011 ************************************ 00:06:52.011 END TEST accel_decomp_mcore 00:06:52.011 ************************************ 00:06:52.271 20:05:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:52.271 20:05:17 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:52.271 20:05:17 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:52.271 20:05:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.271 20:05:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.271 ************************************ 00:06:52.271 START TEST accel_decomp_full_mcore 00:06:52.271 ************************************ 00:06:52.271 20:05:17 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:52.271 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:52.271 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:52.271 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.271 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.271 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:52.271 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:52.271 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:52.271 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.271 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.271 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.271 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.271 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.271 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:52.271 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:52.271 [2024-07-15 20:05:17.448834] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
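The run_test accel_decomp_full_mcore line above reduces to a single accel_perf invocation. A minimal sketch of that command follows, with paths copied from this log; the meanings of -y and -o are inferred from the surrounding trace values and should be treated as assumptions:

    # Sketch of the accel_perf call behind accel_decomp_full_mcore (paths as recorded above).
    # -t 1  : run for one second ("1 seconds" in the trace)
    # -w    : workload under test (decompress)
    # -l    : compressed input file
    # -y    : verify the decompressed output (assumption)
    # -o 0  : transfer size, 0 = whole file, hence the "111250 bytes" value (assumption)
    # -m 0xf: core mask; four cores, matching "Total cores available: 4" below
    # fd 62 must already carry the accel JSON config, as accel.sh arranges before the call.
    ACCEL_PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
    BIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
    "$ACCEL_PERF" -c /dev/fd/62 -t 1 -w decompress -l "$BIB" -y -o 0 -m 0xf

The "Total cores available: 4" notice and the four reactor start-up lines that follow are the direct consequence of the 0xf core mask.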
00:06:52.271 [2024-07-15 20:05:17.448903] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4056553 ] 00:06:52.272 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.272 [2024-07-15 20:05:17.530805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.272 [2024-07-15 20:05:17.622872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.272 [2024-07-15 20:05:17.622972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.272 [2024-07-15 20:05:17.623081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.272 [2024-07-15 20:05:17.623085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.531 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:52.531 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.531 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.531 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.531 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:52.531 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.531 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.531 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.531 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:52.531 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.531 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.531 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.531 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:52.531 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.531 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.531 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.531 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:52.531 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.531 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.532 20:05:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.910 00:06:53.910 real 0m1.420s 00:06:53.910 user 0m4.674s 00:06:53.910 sys 0m0.147s 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.910 20:05:18 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:53.910 ************************************ 00:06:53.910 END TEST accel_decomp_full_mcore 00:06:53.910 ************************************ 00:06:53.910 20:05:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:53.910 20:05:18 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:53.910 20:05:18 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:53.910 20:05:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.910 20:05:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.910 ************************************ 00:06:53.910 START TEST accel_decomp_mthread 00:06:53.910 ************************************ 00:06:53.910 20:05:18 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:53.910 20:05:18 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:53.910 20:05:18 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:53.910 20:05:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.910 20:05:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.910 20:05:18 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:53.910 20:05:18 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:53.910 20:05:18 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:53.910 20:05:18 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.910 20:05:18 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.910 20:05:18 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.910 20:05:18 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.910 20:05:18 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.910 20:05:18 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:53.910 20:05:18 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:53.910 [2024-07-15 20:05:18.932830] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
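The accel_decomp_mthread run that starts here swaps the core mask for -T 2 (two worker threads on the single core 0x1) and keeps the default 4096-byte transfer. The repeating "IFS=:", "read -r var val" and "case $var" lines that dominate each of these blocks are accel.sh capturing the configuration accel_perf prints; the captured module and opcode feed the "[[ -n software ]]" / "[[ -n decompress ]]" checks at the end of every test. A loose reconstruction of that loop, reusing the variables from the previous sketch; the matched key strings are assumptions, only the accel_module/accel_opc captures are confirmed by the trace:

    # Loose sketch of the accel.sh parsing loop behind the repeating trace lines above.
    accel_module= accel_opc=
    while IFS=: read -r var val; do
        case "$var" in
            *Module*)   accel_module=${val##* } ;;   # ends up as "software"
            *Workload*) accel_opc=${val##* } ;;      # ends up as "decompress"
        esac
    done < <("$ACCEL_PERF" -c /dev/fd/62 -t 1 -w decompress -l "$BIB" -y -T 2)
    [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]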
00:06:53.910 [2024-07-15 20:05:18.932885] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4056839 ] 00:06:53.910 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.910 [2024-07-15 20:05:19.012403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.910 [2024-07-15 20:05:19.100686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:53.910 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.911 20:05:19 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:53.911 20:05:19 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.290 20:05:20 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.290 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.291 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.291 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:55.291 20:05:20 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.291 00:06:55.291 real 0m1.399s 00:06:55.291 user 0m1.280s 00:06:55.291 sys 0m0.133s 00:06:55.291 20:05:20 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.291 20:05:20 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:55.291 ************************************ 00:06:55.291 END TEST accel_decomp_mthread 00:06:55.291 ************************************ 00:06:55.291 20:05:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.291 20:05:20 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:55.291 20:05:20 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:55.291 20:05:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.291 20:05:20 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:55.291 ************************************ 00:06:55.291 START TEST accel_decomp_full_mthread 00:06:55.291 ************************************ 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:55.291 [2024-07-15 20:05:20.399495] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
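Every block in this section is framed the same way: run_test prints the START TEST banner, times the test function, emits the real/user/sys lines, then prints the END TEST banner. A hypothetical reduction of that helper, reconstructed only from the output visible in this log; the real function in autotest_common.sh adds xtrace handling and the argument guards such as "'[' 13 -le 1 ']'" seen above:

    # Minimal sketch of the run_test wrapper, inferred from its banners and timings.
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"          # source of the "real/user/sys" lines in this log
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }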
00:06:55.291 [2024-07-15 20:05:20.399547] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4057121 ] 00:06:55.291 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.291 [2024-07-15 20:05:20.480338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.291 [2024-07-15 20:05:20.568466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.291 20:05:20 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.291 20:05:20 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.291 20:05:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.666 00:06:56.666 real 0m1.435s 00:06:56.666 user 0m1.316s 00:06:56.666 sys 0m0.133s 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.666 20:05:21 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:56.666 ************************************ 00:06:56.666 END 
TEST accel_decomp_full_mthread 00:06:56.666 ************************************ 00:06:56.666 20:05:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.666 20:05:21 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:56.666 20:05:21 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:56.666 20:05:21 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:56.666 20:05:21 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:56.666 20:05:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.666 20:05:21 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.666 20:05:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.666 20:05:21 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.666 20:05:21 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.666 20:05:21 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.666 20:05:21 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.666 20:05:21 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:56.666 20:05:21 accel -- accel/accel.sh@41 -- # jq -r . 00:06:56.666 ************************************ 00:06:56.666 START TEST accel_dif_functional_tests 00:06:56.666 ************************************ 00:06:56.666 20:05:21 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:56.666 [2024-07-15 20:05:21.921978] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:56.666 [2024-07-15 20:05:21.922026] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4057407 ] 00:06:56.666 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.666 [2024-07-15 20:05:22.002011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.924 [2024-07-15 20:05:22.092092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.924 [2024-07-15 20:05:22.092196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.924 [2024-07-15 20:05:22.092197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.924 00:06:56.924 00:06:56.924 CUnit - A unit testing framework for C - Version 2.1-3 00:06:56.924 http://cunit.sourceforge.net/ 00:06:56.924 00:06:56.924 00:06:56.924 Suite: accel_dif 00:06:56.924 Test: verify: DIF generated, GUARD check ...passed 00:06:56.924 Test: verify: DIF generated, APPTAG check ...passed 00:06:56.924 Test: verify: DIF generated, REFTAG check ...passed 00:06:56.924 Test: verify: DIF not generated, GUARD check ...[2024-07-15 20:05:22.167204] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:56.924 passed 00:06:56.924 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 20:05:22.167268] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:56.924 passed 00:06:56.925 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 20:05:22.167303] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:56.925 passed 00:06:56.925 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:56.925 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
20:05:22.167365] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:56.925 passed 00:06:56.925 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:56.925 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:56.925 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:56.925 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 20:05:22.167508] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:56.925 passed 00:06:56.925 Test: verify copy: DIF generated, GUARD check ...passed 00:06:56.925 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:56.925 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:56.925 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 20:05:22.167662] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:56.925 passed 00:06:56.925 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 20:05:22.167692] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:56.925 passed 00:06:56.925 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 20:05:22.167720] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:56.925 passed 00:06:56.925 Test: generate copy: DIF generated, GUARD check ...passed 00:06:56.925 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:56.925 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:56.925 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:56.925 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:56.925 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:56.925 Test: generate copy: iovecs-len validate ...[2024-07-15 20:05:22.167946] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:56.925 passed 00:06:56.925 Test: generate copy: buffer alignment validate ...passed 00:06:56.925 00:06:56.925 Run Summary: Type Total Ran Passed Failed Inactive 00:06:56.925 suites 1 1 n/a 0 0 00:06:56.925 tests 26 26 26 0 0 00:06:56.925 asserts 115 115 115 0 n/a 00:06:56.925 00:06:56.925 Elapsed time = 0.002 seconds 00:06:57.182 00:06:57.182 real 0m0.475s 00:06:57.183 user 0m0.692s 00:06:57.183 sys 0m0.163s 00:06:57.183 20:05:22 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.183 20:05:22 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:57.183 ************************************ 00:06:57.183 END TEST accel_dif_functional_tests 00:06:57.183 ************************************ 00:06:57.183 20:05:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.183 00:06:57.183 real 0m32.485s 00:06:57.183 user 0m36.173s 00:06:57.183 sys 0m4.675s 00:06:57.183 20:05:22 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.183 20:05:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.183 ************************************ 00:06:57.183 END TEST accel 00:06:57.183 ************************************ 00:06:57.183 20:05:22 -- common/autotest_common.sh@1142 -- # return 0 00:06:57.183 20:05:22 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:57.183 20:05:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.183 20:05:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.183 20:05:22 -- common/autotest_common.sh@10 -- # set +x 00:06:57.183 ************************************ 00:06:57.183 START TEST accel_rpc 00:06:57.183 ************************************ 00:06:57.183 20:05:22 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:57.183 * Looking for test storage... 00:06:57.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:57.441 20:05:22 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:57.441 20:05:22 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:57.441 20:05:22 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=4057476 00:06:57.441 20:05:22 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 4057476 00:06:57.441 20:05:22 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 4057476 ']' 00:06:57.441 20:05:22 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.441 20:05:22 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.441 20:05:22 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.441 20:05:22 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.441 20:05:22 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.441 [2024-07-15 20:05:22.585567] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
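In the accel_dif block above, the dif.c *ERROR* lines are expected output: each "DIF not generated", "verify copy: DIF not generated" and "iovecs-len validate" case deliberately feeds a mismatching Guard, App Tag, Ref Tag or misaligned bounce buffer, and the CUnit case passes precisely because verification rejects it, which is why the summary still reads 26 of 26 passed. The test binary is launched like accel_perf, with the accel JSON config on an inherited descriptor; a rough way to reproduce that by hand, where the empty config and the choice of fd 62 are assumptions:

    # Rough manual reproduction of the dif functional-test invocation recorded above.
    DIF_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif
    exec 62< <(printf '{}')     # expose a (here: empty) accel config JSON as fd 62 -- assumption
    "$DIF_BIN" -c /dev/fd/62    # runs the 26 CUnit accel_dif cases summarised above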
00:06:57.441 [2024-07-15 20:05:22.585628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4057476 ] 00:06:57.441 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.441 [2024-07-15 20:05:22.666999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.441 [2024-07-15 20:05:22.756609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.700 20:05:22 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.700 20:05:22 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:57.700 20:05:22 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:57.700 20:05:22 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:57.700 20:05:22 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:57.700 20:05:22 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:57.700 20:05:22 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:57.700 20:05:22 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.700 20:05:22 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.700 20:05:22 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.700 ************************************ 00:06:57.700 START TEST accel_assign_opcode 00:06:57.700 ************************************ 00:06:57.700 20:05:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:57.700 20:05:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:57.700 20:05:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.700 20:05:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:57.700 [2024-07-15 20:05:22.837176] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:57.700 20:05:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.700 20:05:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:57.700 20:05:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.700 20:05:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:57.700 [2024-07-15 20:05:22.845192] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:57.700 20:05:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.700 20:05:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:57.700 20:05:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.700 20:05:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:57.700 20:05:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.700 20:05:23 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:57.700 20:05:23 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:57.700 20:05:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:06:57.700 20:05:23 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:57.700 20:05:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:57.960 20:05:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.960 software 00:06:57.960 00:06:57.960 real 0m0.251s 00:06:57.960 user 0m0.049s 00:06:57.960 sys 0m0.009s 00:06:57.960 20:05:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.960 20:05:23 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:57.960 ************************************ 00:06:57.960 END TEST accel_assign_opcode 00:06:57.960 ************************************ 00:06:57.960 20:05:23 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:57.960 20:05:23 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 4057476 00:06:57.960 20:05:23 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 4057476 ']' 00:06:57.960 20:05:23 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 4057476 00:06:57.960 20:05:23 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:57.960 20:05:23 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.960 20:05:23 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4057476 00:06:57.960 20:05:23 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.960 20:05:23 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.960 20:05:23 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4057476' 00:06:57.960 killing process with pid 4057476 00:06:57.960 20:05:23 accel_rpc -- common/autotest_common.sh@967 -- # kill 4057476 00:06:57.960 20:05:23 accel_rpc -- common/autotest_common.sh@972 -- # wait 4057476 00:06:58.218 00:06:58.219 real 0m1.047s 00:06:58.219 user 0m1.024s 00:06:58.219 sys 0m0.442s 00:06:58.219 20:05:23 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.219 20:05:23 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.219 ************************************ 00:06:58.219 END TEST accel_rpc 00:06:58.219 ************************************ 00:06:58.219 20:05:23 -- common/autotest_common.sh@1142 -- # return 0 00:06:58.219 20:05:23 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:58.219 20:05:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.219 20:05:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.219 20:05:23 -- common/autotest_common.sh@10 -- # set +x 00:06:58.219 ************************************ 00:06:58.219 START TEST app_cmdline 00:06:58.219 ************************************ 00:06:58.477 20:05:23 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:58.477 * Looking for test storage... 
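The accel_assign_opcode block above drives the freshly started spdk_tgt purely over JSON-RPC; rpc_cmd is the autotest wrapper around scripts/rpc.py. The same sequence, spelled out directly with the paths recorded in this log:

    # The RPC sequence exercised by accel_assign_opcode, written out with rpc.py.
    SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$SPDK_TGT" --wait-for-rpc &                    # start with subsystem init deferred
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"                   # autotest_common.sh helper; waits for /var/tmp/spdk.sock
    "$RPC" accel_assign_opc -o copy -m software     # "Operation copy will be assigned to module software"
    "$RPC" framework_start_init                     # finish init so the assignment is applied
    "$RPC" accel_get_opc_assignments | jq -r .copy  # prints: software
    kill "$spdk_tgt_pid"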
00:06:58.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:58.477 20:05:23 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:58.477 20:05:23 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4057815 00:06:58.477 20:05:23 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4057815 00:06:58.477 20:05:23 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:58.477 20:05:23 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 4057815 ']' 00:06:58.477 20:05:23 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.477 20:05:23 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.477 20:05:23 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.477 20:05:23 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.477 20:05:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:58.477 [2024-07-15 20:05:23.708018] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:06:58.477 [2024-07-15 20:05:23.708075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4057815 ] 00:06:58.477 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.477 [2024-07-15 20:05:23.788222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.736 [2024-07-15 20:05:23.882081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.304 20:05:24 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.304 20:05:24 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:59.304 20:05:24 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:59.563 { 00:06:59.563 "version": "SPDK v24.09-pre git sha1 24018edd4", 00:06:59.563 "fields": { 00:06:59.563 "major": 24, 00:06:59.563 "minor": 9, 00:06:59.563 "patch": 0, 00:06:59.563 "suffix": "-pre", 00:06:59.563 "commit": "24018edd4" 00:06:59.563 } 00:06:59.563 } 00:06:59.563 20:05:24 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:59.563 20:05:24 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:59.563 20:05:24 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:59.563 20:05:24 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:59.563 20:05:24 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:59.563 20:05:24 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:59.563 20:05:24 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.563 20:05:24 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:59.563 20:05:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.563 20:05:24 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.822 20:05:24 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:59.822 20:05:24 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:59.822 20:05:24 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.822 20:05:24 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:59.822 20:05:24 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.822 20:05:24 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.823 20:05:24 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.823 20:05:24 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.823 20:05:24 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.823 20:05:24 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.823 20:05:24 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.823 20:05:24 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.823 20:05:24 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:59.823 20:05:24 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.823 request: 00:06:59.823 { 00:06:59.823 "method": "env_dpdk_get_mem_stats", 00:06:59.823 "req_id": 1 00:06:59.823 } 00:06:59.823 Got JSON-RPC error response 00:06:59.823 response: 00:06:59.823 { 00:06:59.823 "code": -32601, 00:06:59.823 "message": "Method not found" 00:06:59.823 } 00:07:00.082 20:05:25 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:00.082 20:05:25 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.082 20:05:25 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:00.082 20:05:25 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.082 20:05:25 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4057815 00:07:00.082 20:05:25 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 4057815 ']' 00:07:00.082 20:05:25 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 4057815 00:07:00.082 20:05:25 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:00.082 20:05:25 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.082 20:05:25 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4057815 00:07:00.082 20:05:25 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:00.082 20:05:25 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:00.082 20:05:25 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4057815' 00:07:00.082 killing process with pid 4057815 00:07:00.082 20:05:25 app_cmdline -- common/autotest_common.sh@967 -- # kill 4057815 00:07:00.082 20:05:25 app_cmdline -- common/autotest_common.sh@972 -- # wait 4057815 00:07:00.341 00:07:00.341 real 0m1.984s 00:07:00.341 user 0m2.562s 00:07:00.341 sys 0m0.457s 00:07:00.341 20:05:25 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
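The app_cmdline checks above exercise the --rpcs-allowed allow-list: spdk_tgt is started so that only spdk_get_version and rpc_get_methods are callable, an allowed method returns normally, and anything else fails with JSON-RPC error -32601 (Method not found), exactly as logged. A condensed sketch under the same assumptions (workspace-relative paths, default /var/tmp/spdk.sock socket):

  # target restricted to two RPC methods
  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  # allowed: prints the version object shown above
  scripts/rpc.py spdk_get_version
  # not on the allow-list: expected to fail with code -32601
  scripts/rpc.py env_dpdk_get_mem_stats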
00:07:00.341 20:05:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:00.341 ************************************ 00:07:00.341 END TEST app_cmdline 00:07:00.341 ************************************ 00:07:00.341 20:05:25 -- common/autotest_common.sh@1142 -- # return 0 00:07:00.341 20:05:25 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:00.341 20:05:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.341 20:05:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.341 20:05:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.341 ************************************ 00:07:00.341 START TEST version 00:07:00.341 ************************************ 00:07:00.341 20:05:25 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:00.601 * Looking for test storage... 00:07:00.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:00.601 20:05:25 version -- app/version.sh@17 -- # get_header_version major 00:07:00.601 20:05:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:00.601 20:05:25 version -- app/version.sh@14 -- # cut -f2 00:07:00.601 20:05:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.601 20:05:25 version -- app/version.sh@17 -- # major=24 00:07:00.601 20:05:25 version -- app/version.sh@18 -- # get_header_version minor 00:07:00.601 20:05:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:00.601 20:05:25 version -- app/version.sh@14 -- # cut -f2 00:07:00.601 20:05:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.602 20:05:25 version -- app/version.sh@18 -- # minor=9 00:07:00.602 20:05:25 version -- app/version.sh@19 -- # get_header_version patch 00:07:00.602 20:05:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:00.602 20:05:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.602 20:05:25 version -- app/version.sh@14 -- # cut -f2 00:07:00.602 20:05:25 version -- app/version.sh@19 -- # patch=0 00:07:00.602 20:05:25 version -- app/version.sh@20 -- # get_header_version suffix 00:07:00.602 20:05:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:00.602 20:05:25 version -- app/version.sh@14 -- # cut -f2 00:07:00.602 20:05:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.602 20:05:25 version -- app/version.sh@20 -- # suffix=-pre 00:07:00.602 20:05:25 version -- app/version.sh@22 -- # version=24.9 00:07:00.602 20:05:25 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:00.602 20:05:25 version -- app/version.sh@28 -- # version=24.9rc0 00:07:00.602 20:05:25 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:00.602 20:05:25 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:00.602 20:05:25 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:00.602 20:05:25 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:00.602 00:07:00.602 real 0m0.161s 00:07:00.602 user 0m0.079s 00:07:00.602 sys 0m0.116s 00:07:00.602 20:05:25 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.602 20:05:25 version -- common/autotest_common.sh@10 -- # set +x 00:07:00.602 ************************************ 00:07:00.602 END TEST version 00:07:00.602 ************************************ 00:07:00.602 20:05:25 -- common/autotest_common.sh@1142 -- # return 0 00:07:00.602 20:05:25 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:00.602 20:05:25 -- spdk/autotest.sh@198 -- # uname -s 00:07:00.602 20:05:25 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:00.602 20:05:25 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:00.602 20:05:25 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:00.602 20:05:25 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:00.602 20:05:25 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:00.602 20:05:25 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:00.602 20:05:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:00.602 20:05:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.602 20:05:25 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:00.602 20:05:25 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:00.602 20:05:25 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:00.602 20:05:25 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:00.602 20:05:25 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:00.602 20:05:25 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:00.602 20:05:25 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:00.602 20:05:25 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:00.602 20:05:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.602 20:05:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.602 ************************************ 00:07:00.602 START TEST nvmf_tcp 00:07:00.602 ************************************ 00:07:00.602 20:05:25 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:00.873 * Looking for test storage... 00:07:00.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:00.873 20:05:25 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.873 20:05:25 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.873 20:05:25 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.873 20:05:25 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.873 20:05:25 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.873 20:05:25 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.873 20:05:25 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:00.873 20:05:25 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:00.873 20:05:25 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:00.873 20:05:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:00.873 20:05:25 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:00.873 20:05:26 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:00.873 20:05:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:00.873 20:05:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.873 20:05:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:00.873 ************************************ 00:07:00.873 START TEST nvmf_example 00:07:00.873 ************************************ 00:07:00.873 20:05:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:00.873 * Looking for test storage... 
00:07:00.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:00.873 20:05:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:00.873 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:00.873 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.873 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.873 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.873 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:00.874 20:05:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:06.149 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:06.149 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:06.149 Found net devices under 
0000:af:00.0: cvl_0_0 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:06.149 Found net devices under 0000:af:00.1: cvl_0_1 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.149 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:06.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:06.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:07:06.409 00:07:06.409 --- 10.0.0.2 ping statistics --- 00:07:06.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.409 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:06.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:07:06.409 00:07:06.409 --- 10.0.0.1 ping statistics --- 00:07:06.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.409 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=4061553 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 4061553 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 4061553 ']' 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
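Condensed, the nvmf_tcp_init plumbing traced above moves one E810 port into a private network namespace, addresses both ends of the link, opens the NVMe/TCP port, and ping-checks both directions. The interface names cvl_0_0 and cvl_0_1 are specific to this host, and the standalone command form below is only a sketch of what the common.sh helpers run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator side -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator side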
00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.409 20:05:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.669 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:07.635 20:05:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:07.635 EAL: No free 2048 kB hugepages reported on node 1 
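The rpc_cmd sequence above is the standard NVMe-oF/TCP target bring-up against the running example app: create the TCP transport, create a 64 MiB malloc bdev, create a subsystem, attach the namespace, and expose a listener, after which spdk_nvme_perf drives a 10-second 4 KiB randrw workload at queue depth 64. Written out as direct commands (the standalone rpc.py form is an assumption; in the harness, rpc_cmd routes these to the namespaced target):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512        # returns bdev name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'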
00:07:19.864 Initializing NVMe Controllers 00:07:19.864 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:19.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:19.864 Initialization complete. Launching workers. 00:07:19.864 ======================================================== 00:07:19.864 Latency(us) 00:07:19.864 Device Information : IOPS MiB/s Average min max 00:07:19.864 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15489.04 60.50 4133.21 881.87 20029.15 00:07:19.864 ======================================================== 00:07:19.864 Total : 15489.04 60.50 4133.21 881.87 20029.15 00:07:19.864 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:19.864 rmmod nvme_tcp 00:07:19.864 rmmod nvme_fabrics 00:07:19.864 rmmod nvme_keyring 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 4061553 ']' 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 4061553 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 4061553 ']' 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 4061553 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4061553 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4061553' 00:07:19.864 killing process with pid 4061553 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 4061553 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 4061553 00:07:19.864 nvmf threads initialize successfully 00:07:19.864 bdev subsystem init successfully 00:07:19.864 created a nvmf target service 00:07:19.864 create targets's poll groups done 00:07:19.864 all subsystems of target started 00:07:19.864 nvmf target is running 00:07:19.864 all subsystems of target stopped 00:07:19.864 destroy targets's poll groups done 00:07:19.864 destroyed the nvmf target service 00:07:19.864 bdev subsystem finish successfully 00:07:19.864 nvmf threads destroy successfully 00:07:19.864 20:05:43 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:19.864 20:05:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.434 20:05:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:20.434 20:05:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:20.434 20:05:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:20.434 20:05:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:20.434 00:07:20.434 real 0m19.541s 00:07:20.434 user 0m46.908s 00:07:20.434 sys 0m5.572s 00:07:20.434 20:05:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.434 20:05:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:20.434 ************************************ 00:07:20.434 END TEST nvmf_example 00:07:20.434 ************************************ 00:07:20.434 20:05:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:20.434 20:05:45 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:20.434 20:05:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:20.434 20:05:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.434 20:05:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:20.434 ************************************ 00:07:20.434 START TEST nvmf_filesystem 00:07:20.434 ************************************ 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:20.434 * Looking for test storage... 
00:07:20.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:20.434 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:20.435 20:05:45 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:20.435 20:05:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:20.435 #define SPDK_CONFIG_H 00:07:20.435 #define SPDK_CONFIG_APPS 1 00:07:20.435 #define SPDK_CONFIG_ARCH native 00:07:20.435 #undef SPDK_CONFIG_ASAN 00:07:20.435 #undef SPDK_CONFIG_AVAHI 00:07:20.435 #undef SPDK_CONFIG_CET 00:07:20.435 #define SPDK_CONFIG_COVERAGE 1 00:07:20.435 #define SPDK_CONFIG_CROSS_PREFIX 00:07:20.435 #undef SPDK_CONFIG_CRYPTO 00:07:20.435 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:20.435 #undef SPDK_CONFIG_CUSTOMOCF 00:07:20.435 #undef SPDK_CONFIG_DAOS 00:07:20.435 #define SPDK_CONFIG_DAOS_DIR 00:07:20.435 #define SPDK_CONFIG_DEBUG 1 00:07:20.435 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:20.435 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:20.435 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:20.435 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:20.435 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:20.435 #undef SPDK_CONFIG_DPDK_UADK 00:07:20.435 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:20.435 #define SPDK_CONFIG_EXAMPLES 1 00:07:20.435 #undef SPDK_CONFIG_FC 00:07:20.435 #define SPDK_CONFIG_FC_PATH 00:07:20.435 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:20.435 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:20.435 #undef SPDK_CONFIG_FUSE 00:07:20.435 #undef SPDK_CONFIG_FUZZER 00:07:20.435 #define SPDK_CONFIG_FUZZER_LIB 00:07:20.435 #undef SPDK_CONFIG_GOLANG 00:07:20.435 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:20.435 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:20.435 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:20.435 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:20.435 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:20.435 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:20.435 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:20.435 #define SPDK_CONFIG_IDXD 1 00:07:20.435 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:20.435 #undef SPDK_CONFIG_IPSEC_MB 00:07:20.435 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:20.435 #define SPDK_CONFIG_ISAL 1 00:07:20.435 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:20.435 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:20.435 #define SPDK_CONFIG_LIBDIR 00:07:20.435 #undef SPDK_CONFIG_LTO 00:07:20.435 #define SPDK_CONFIG_MAX_LCORES 128 00:07:20.435 #define SPDK_CONFIG_NVME_CUSE 1 00:07:20.435 #undef SPDK_CONFIG_OCF 00:07:20.435 #define SPDK_CONFIG_OCF_PATH 00:07:20.435 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:20.435 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:20.435 #define SPDK_CONFIG_PGO_DIR 00:07:20.435 #undef SPDK_CONFIG_PGO_USE 00:07:20.435 #define SPDK_CONFIG_PREFIX /usr/local 00:07:20.435 #undef SPDK_CONFIG_RAID5F 00:07:20.435 #undef SPDK_CONFIG_RBD 00:07:20.435 #define SPDK_CONFIG_RDMA 1 00:07:20.435 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:20.435 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:20.435 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:20.435 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:20.436 #define SPDK_CONFIG_SHARED 1 00:07:20.436 #undef SPDK_CONFIG_SMA 00:07:20.436 #define SPDK_CONFIG_TESTS 1 00:07:20.436 #undef SPDK_CONFIG_TSAN 00:07:20.436 #define SPDK_CONFIG_UBLK 1 00:07:20.436 #define SPDK_CONFIG_UBSAN 1 00:07:20.436 #undef SPDK_CONFIG_UNIT_TESTS 00:07:20.436 #undef SPDK_CONFIG_URING 00:07:20.436 #define SPDK_CONFIG_URING_PATH 00:07:20.436 #undef SPDK_CONFIG_URING_ZNS 00:07:20.436 #undef SPDK_CONFIG_USDT 00:07:20.436 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:20.436 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:20.436 #define SPDK_CONFIG_VFIO_USER 1 00:07:20.436 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:20.436 #define SPDK_CONFIG_VHOST 1 00:07:20.436 #define SPDK_CONFIG_VIRTIO 1 00:07:20.436 #undef SPDK_CONFIG_VTUNE 00:07:20.436 #define SPDK_CONFIG_VTUNE_DIR 00:07:20.436 #define SPDK_CONFIG_WERROR 1 00:07:20.436 #define SPDK_CONFIG_WPDK_DIR 00:07:20.436 #undef SPDK_CONFIG_XNVME 00:07:20.436 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:20.436 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:20.697 20:05:45 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:20.697 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
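[Editor's note] The trace above shows autotest_common.sh preparing the sanitizer environment before any test logic runs: ASAN/UBSAN options are exported with fail-fast settings, and a LeakSanitizer suppression file is rebuilt for this run (the only entry visible here is leak:libfuse3.so). A condensed sketch of that pattern, assuming the same file path and values as the trace (the exact way the harness assembles the file may differ):

  # Regenerate the LeakSanitizer suppression list for this run and wire up
  # consistent fail-fast sanitizer options (values copied from the trace).
  asan_suppression_file=/var/tmp/asan_suppression_file
  rm -rf "$asan_suppression_file"
  echo "leak:libfuse3.so" >> "$asan_suppression_file"
  export LSAN_OPTIONS=suppressions=$asan_suppression_file
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134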
00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 4064107 ]] 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 4064107 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.nKdrb1 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.nKdrb1/tests/target /tmp/spdk.nKdrb1 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=954339328 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330090496 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=83811454976 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=94501482496 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10690027520 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=47195103232 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=47250739200 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=18890862592 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=18900299776 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9437184 00:07:20.698 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=47249842176 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=47250743296 00:07:20.699 20:05:45 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=901120 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=9450143744 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450147840 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:20.699 * Looking for test storage... 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=83811454976 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12904620032 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:20.699 20:05:45 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
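[Editor's note] The set_test_storage pass traced above decides where the filesystem tests stage scratch data: it builds candidate directories from mktemp -udt spdk.XXXXXX, reads df -T into parallel arrays (mounts/fss/sizes/avails/uses), and accepts the first candidate whose backing mount has room for the requested ~2 GiB, with an additional usage guard. A simplified sketch of the acceptance check, inferred from the values in this trace (array names follow the script; the exact rejection path is not shown in the log):

  # Accept a candidate only if its backing mount can hold the requested scratch
  # size; sizes/avails/uses come from the df -T pass captured above.
  requested_size=2214592512                      # ~2 GiB plus margin, as in the trace
  target_space=${avails[$mount]}                 # free bytes on the mount backing $target_dir
  if (( target_space == 0 || target_space < requested_size )); then
      continue                                   # not enough room, try the next candidate
  fi
  new_size=$(( requested_size + uses[$mount] ))  # projected usage if this mount is chosen
  if (( new_size * 100 / sizes[$mount] > 95 )); then
      continue                                   # would push the filesystem past ~95% full
  fi
  export SPDK_TEST_STORAGE=$target_dir           # here: .../spdk/test/nvmf/target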
00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:20.699 20:05:45 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:20.699 20:05:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.068 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:26.068 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:26.068 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:26.068 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:26.068 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:26.068 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:26.068 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:26.068 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:26.069 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:26.069 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.069 20:05:51 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:26.069 Found net devices under 0000:af:00.0: cvl_0_0 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:26.069 Found net devices under 0000:af:00.1: cvl_0_1 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:26.069 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:26.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:07:26.329 00:07:26.329 --- 10.0.0.2 ping statistics --- 00:07:26.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.329 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:26.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:26.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:07:26.329 00:07:26.329 --- 10.0.0.1 ping statistics --- 00:07:26.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.329 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:26.329 ************************************ 00:07:26.329 START TEST nvmf_filesystem_no_in_capsule 00:07:26.329 ************************************ 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=4067266 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 4067266 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 4067266 ']' 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.329 20:05:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.329 [2024-07-15 20:05:51.569079] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:07:26.329 [2024-07-15 20:05:51.569129] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.329 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.329 [2024-07-15 20:05:51.653295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.588 [2024-07-15 20:05:51.748074] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.588 [2024-07-15 20:05:51.748121] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.588 [2024-07-15 20:05:51.748131] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.588 [2024-07-15 20:05:51.748140] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.588 [2024-07-15 20:05:51.748148] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
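
The trace up to this point runs nvmf_tcp_init: the e810 port cvl_0_0 is moved into a fresh network namespace to act as the target side, cvl_0_1 stays in the root namespace as the initiator, the two sides get 10.0.0.2 and 10.0.0.1, TCP port 4420 is opened in iptables, and connectivity is checked with one ping in each direction before nvmf_tgt is started inside the namespace. A condensed sketch of that setup, using the interface names, addresses, namespace name and nvmf_tgt path taken from the log:

# target-side interface goes into its own namespace; initiator stays in the root namespace
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator
modprobe nvme-tcp
# the target application itself is then launched inside the namespace:
ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
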
00:07:26.588 [2024-07-15 20:05:51.748202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.588 [2024-07-15 20:05:51.748292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.588 [2024-07-15 20:05:51.748346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.588 [2024-07-15 20:05:51.748349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.171 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.171 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:27.171 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:27.171 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:27.171 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.430 [2024-07-15 20:05:52.562920] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.430 Malloc1 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.430 [2024-07-15 20:05:52.721718] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.430 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:27.430 { 00:07:27.430 "name": "Malloc1", 00:07:27.430 "aliases": [ 00:07:27.430 "df911389-326b-4d37-9830-63d2a02130c9" 00:07:27.430 ], 00:07:27.430 "product_name": "Malloc disk", 00:07:27.430 "block_size": 512, 00:07:27.430 "num_blocks": 1048576, 00:07:27.430 "uuid": "df911389-326b-4d37-9830-63d2a02130c9", 00:07:27.430 "assigned_rate_limits": { 00:07:27.430 "rw_ios_per_sec": 0, 00:07:27.430 "rw_mbytes_per_sec": 0, 00:07:27.430 "r_mbytes_per_sec": 0, 00:07:27.430 "w_mbytes_per_sec": 0 00:07:27.430 }, 00:07:27.430 "claimed": true, 00:07:27.430 "claim_type": "exclusive_write", 00:07:27.430 "zoned": false, 00:07:27.430 "supported_io_types": { 00:07:27.430 "read": true, 00:07:27.430 "write": true, 00:07:27.430 "unmap": true, 00:07:27.430 "flush": true, 00:07:27.430 "reset": true, 00:07:27.430 "nvme_admin": false, 00:07:27.430 "nvme_io": false, 00:07:27.430 "nvme_io_md": false, 00:07:27.430 "write_zeroes": true, 00:07:27.430 "zcopy": true, 00:07:27.430 "get_zone_info": false, 00:07:27.430 "zone_management": false, 00:07:27.430 "zone_append": false, 00:07:27.430 "compare": false, 00:07:27.430 "compare_and_write": false, 00:07:27.430 "abort": true, 00:07:27.430 "seek_hole": false, 00:07:27.430 "seek_data": false, 00:07:27.430 "copy": true, 00:07:27.430 "nvme_iov_md": false 00:07:27.430 }, 00:07:27.430 "memory_domains": [ 00:07:27.430 { 
00:07:27.430 "dma_device_id": "system", 00:07:27.430 "dma_device_type": 1 00:07:27.430 }, 00:07:27.430 { 00:07:27.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.431 "dma_device_type": 2 00:07:27.431 } 00:07:27.431 ], 00:07:27.431 "driver_specific": {} 00:07:27.431 } 00:07:27.431 ]' 00:07:27.431 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:27.690 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:27.690 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:27.690 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:27.690 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:27.690 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:27.690 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:27.690 20:05:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:29.070 20:05:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:29.070 20:05:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:29.070 20:05:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:29.070 20:05:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:29.070 20:05:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:30.978 20:05:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:30.978 20:05:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:30.978 20:05:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:30.978 20:05:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:30.978 20:05:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:30.978 20:05:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:30.978 20:05:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:30.978 20:05:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:30.978 20:05:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:30.978 20:05:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:30.978 20:05:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:30.978 20:05:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:30.978 20:05:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:30.978 20:05:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:30.978 20:05:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:30.978 20:05:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:30.978 20:05:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:31.237 20:05:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:31.237 20:05:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:32.614 20:05:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:32.614 20:05:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:32.614 20:05:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:32.614 20:05:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.614 20:05:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.614 ************************************ 00:07:32.614 START TEST filesystem_ext4 00:07:32.614 ************************************ 00:07:32.614 20:05:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:32.614 20:05:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:32.614 20:05:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:32.614 20:05:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:32.614 20:05:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:32.614 20:05:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:32.614 20:05:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:32.614 20:05:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:32.614 20:05:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:32.614 20:05:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:32.614 20:05:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:32.614 mke2fs 1.46.5 (30-Dec-2021) 00:07:32.614 Discarding device blocks: 0/522240 done 00:07:32.614 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:32.614 Filesystem UUID: edf33106-9208-40fc-84e9-67f48de9eb2b 00:07:32.614 Superblock backups stored on blocks: 00:07:32.614 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:32.614 00:07:32.614 Allocating group tables: 0/64 done 00:07:32.615 Writing inode tables: 0/64 done 00:07:32.615 Creating journal (8192 blocks): done 00:07:33.441 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:07:33.441 00:07:33.441 20:05:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:33.441 20:05:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:34.376 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:34.376 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:34.376 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:34.376 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:34.376 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:34.376 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:34.376 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 4067266 00:07:34.376 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:34.376 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:34.376 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:34.376 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:34.376 00:07:34.376 real 0m1.957s 00:07:34.376 user 0m0.024s 00:07:34.377 sys 0m0.069s 00:07:34.377 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.377 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:34.377 ************************************ 00:07:34.377 END TEST filesystem_ext4 00:07:34.377 ************************************ 00:07:34.377 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:34.377 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:34.377 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:34.377 20:05:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.377 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.377 ************************************ 00:07:34.377 START TEST filesystem_btrfs 00:07:34.377 ************************************ 00:07:34.377 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:34.377 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:34.377 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:34.377 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:34.377 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:34.377 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:34.377 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:34.377 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:34.377 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:34.377 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:34.377 20:05:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:34.944 btrfs-progs v6.6.2 00:07:34.944 See https://btrfs.readthedocs.io for more information. 00:07:34.944 00:07:34.944 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:34.944 NOTE: several default settings have changed in version 5.15, please make sure 00:07:34.944 this does not affect your deployments: 00:07:34.944 - DUP for metadata (-m dup) 00:07:34.944 - enabled no-holes (-O no-holes) 00:07:34.944 - enabled free-space-tree (-R free-space-tree) 00:07:34.944 00:07:34.944 Label: (null) 00:07:34.944 UUID: fa24724f-2b9f-4e54-a724-09ea9ab68a44 00:07:34.944 Node size: 16384 00:07:34.944 Sector size: 4096 00:07:34.944 Filesystem size: 510.00MiB 00:07:34.944 Block group profiles: 00:07:34.944 Data: single 8.00MiB 00:07:34.944 Metadata: DUP 32.00MiB 00:07:34.944 System: DUP 8.00MiB 00:07:34.944 SSD detected: yes 00:07:34.944 Zoned device: no 00:07:34.944 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:34.944 Runtime features: free-space-tree 00:07:34.944 Checksum: crc32c 00:07:34.944 Number of devices: 1 00:07:34.944 Devices: 00:07:34.944 ID SIZE PATH 00:07:34.944 1 510.00MiB /dev/nvme0n1p1 00:07:34.944 00:07:34.944 20:06:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:34.944 20:06:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:35.880 20:06:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:35.880 20:06:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:35.880 20:06:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:35.880 20:06:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:35.880 20:06:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:35.880 20:06:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:35.880 20:06:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 4067266 00:07:35.880 20:06:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:35.880 20:06:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:35.880 20:06:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:35.880 20:06:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:35.880 00:07:35.880 real 0m1.364s 00:07:35.880 user 0m0.037s 00:07:35.880 sys 0m0.116s 00:07:35.880 20:06:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.880 20:06:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:35.880 ************************************ 00:07:35.880 END TEST filesystem_btrfs 00:07:35.880 ************************************ 00:07:35.880 20:06:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:35.880 20:06:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:35.880 20:06:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:35.880 20:06:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.880 20:06:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.880 ************************************ 00:07:35.880 START TEST filesystem_xfs 00:07:35.880 ************************************ 00:07:35.880 20:06:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:35.880 20:06:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:35.880 20:06:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:35.880 20:06:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:35.880 20:06:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:35.880 20:06:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:35.880 20:06:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:35.880 20:06:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:35.880 20:06:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:35.880 20:06:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:35.880 20:06:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:35.880 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:35.880 = sectsz=512 attr=2, projid32bit=1 00:07:35.880 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:35.880 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:35.880 data = bsize=4096 blocks=130560, imaxpct=25 00:07:35.880 = sunit=0 swidth=0 blks 00:07:35.880 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:35.880 log =internal log bsize=4096 blocks=16384, version=2 00:07:35.880 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:35.880 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:36.816 Discarding blocks...Done. 
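
Each filesystem_* sub-test above follows the same pattern from target/filesystem.sh: find the exported namespace by its subsystem serial, partition it, build the filesystem, then mount it and run a small write/remove cycle while checking that the target process is still alive. A rough sketch of that loop, with the device, mount point and serial as shown in the log ($nvmfpid stands for the nvmf_tgt pid; mkfs.ext4 -F and mkfs.btrfs -f replace the mkfs step in the other two runs):

# locate the NVMe/TCP namespace by the subsystem serial reported in lsblk
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
mkfs.xfs -f "/dev/${nvme_name}p1"
mkdir -p /mnt/device
mount "/dev/${nvme_name}p1" /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"        # the target must still be running after the I/O
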
00:07:36.816 20:06:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:36.816 20:06:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:39.348 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:39.348 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:39.348 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:39.348 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:39.348 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:39.348 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:39.348 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 4067266 00:07:39.348 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:39.348 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:39.348 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:39.348 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:39.348 00:07:39.348 real 0m3.230s 00:07:39.348 user 0m0.022s 00:07:39.348 sys 0m0.072s 00:07:39.348 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.348 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:39.348 ************************************ 00:07:39.348 END TEST filesystem_xfs 00:07:39.348 ************************************ 00:07:39.348 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:39.348 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:39.348 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:39.348 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:39.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:39.607 20:06:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 4067266 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 4067266 ']' 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 4067266 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4067266 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4067266' 00:07:39.607 killing process with pid 4067266 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 4067266 00:07:39.607 20:06:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 4067266 00:07:39.865 20:06:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:39.865 00:07:39.865 real 0m13.700s 00:07:39.865 user 0m53.737s 00:07:39.865 sys 0m1.348s 00:07:39.865 20:06:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.865 20:06:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.865 ************************************ 00:07:39.865 END TEST nvmf_filesystem_no_in_capsule 00:07:39.865 ************************************ 00:07:40.124 20:06:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:40.124 20:06:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:40.124 20:06:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:40.124 20:06:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.124 20:06:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.124 ************************************ 00:07:40.124 START TEST nvmf_filesystem_in_capsule 00:07:40.124 ************************************ 00:07:40.124 20:06:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:40.124 20:06:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:40.124 20:06:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:40.124 20:06:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:40.124 20:06:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:40.124 20:06:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.124 20:06:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=4070162 00:07:40.124 20:06:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 4070162 00:07:40.124 20:06:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:40.124 20:06:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 4070162 ']' 00:07:40.124 20:06:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.124 20:06:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:40.124 20:06:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.124 20:06:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:40.124 20:06:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.124 [2024-07-15 20:06:05.344990] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:07:40.124 [2024-07-15 20:06:05.345053] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.124 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.124 [2024-07-15 20:06:05.432957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.382 [2024-07-15 20:06:05.523342] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.382 [2024-07-15 20:06:05.523387] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:40.382 [2024-07-15 20:06:05.523397] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:40.382 [2024-07-15 20:06:05.523406] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:40.382 [2024-07-15 20:06:05.523413] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:40.382 [2024-07-15 20:06:05.523460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.382 [2024-07-15 20:06:05.523564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.382 [2024-07-15 20:06:05.523673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.382 [2024-07-15 20:06:05.523676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.947 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.947 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:40.948 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.206 [2024-07-15 20:06:06.340209] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.206 Malloc1 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.206 20:06:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.206 [2024-07-15 20:06:06.493847] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.206 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:41.206 { 00:07:41.206 "name": "Malloc1", 00:07:41.206 "aliases": [ 00:07:41.206 "f8ebb9c2-7617-4829-96e7-8b3f66a6b7fc" 00:07:41.206 ], 00:07:41.206 "product_name": "Malloc disk", 00:07:41.206 "block_size": 512, 00:07:41.206 "num_blocks": 1048576, 00:07:41.206 "uuid": "f8ebb9c2-7617-4829-96e7-8b3f66a6b7fc", 00:07:41.206 "assigned_rate_limits": { 00:07:41.206 "rw_ios_per_sec": 0, 00:07:41.206 "rw_mbytes_per_sec": 0, 00:07:41.206 "r_mbytes_per_sec": 0, 00:07:41.206 "w_mbytes_per_sec": 0 00:07:41.206 }, 00:07:41.206 "claimed": true, 00:07:41.206 "claim_type": "exclusive_write", 00:07:41.207 "zoned": false, 00:07:41.207 "supported_io_types": { 00:07:41.207 "read": true, 00:07:41.207 "write": true, 00:07:41.207 "unmap": true, 00:07:41.207 "flush": true, 00:07:41.207 "reset": true, 00:07:41.207 "nvme_admin": false, 00:07:41.207 "nvme_io": false, 00:07:41.207 "nvme_io_md": false, 00:07:41.207 "write_zeroes": true, 00:07:41.207 "zcopy": true, 00:07:41.207 "get_zone_info": false, 00:07:41.207 "zone_management": false, 00:07:41.207 
"zone_append": false, 00:07:41.207 "compare": false, 00:07:41.207 "compare_and_write": false, 00:07:41.207 "abort": true, 00:07:41.207 "seek_hole": false, 00:07:41.207 "seek_data": false, 00:07:41.207 "copy": true, 00:07:41.207 "nvme_iov_md": false 00:07:41.207 }, 00:07:41.207 "memory_domains": [ 00:07:41.207 { 00:07:41.207 "dma_device_id": "system", 00:07:41.207 "dma_device_type": 1 00:07:41.207 }, 00:07:41.207 { 00:07:41.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.207 "dma_device_type": 2 00:07:41.207 } 00:07:41.207 ], 00:07:41.207 "driver_specific": {} 00:07:41.207 } 00:07:41.207 ]' 00:07:41.207 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:41.464 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:41.464 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:41.464 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:41.464 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:41.464 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:41.464 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:41.464 20:06:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:42.836 20:06:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:42.836 20:06:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:42.836 20:06:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:42.836 20:06:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:42.836 20:06:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:44.741 20:06:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:44.741 20:06:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:44.741 20:06:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:44.741 20:06:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:44.741 20:06:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:44.741 20:06:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:44.741 20:06:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:44.741 20:06:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:44.741 20:06:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:44.741 20:06:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:44.741 20:06:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:44.741 20:06:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:44.741 20:06:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:44.741 20:06:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:44.741 20:06:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:44.741 20:06:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:44.741 20:06:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:45.000 20:06:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:45.568 20:06:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:46.505 20:06:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:46.505 20:06:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:46.505 20:06:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:46.505 20:06:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.505 20:06:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:46.505 ************************************ 00:07:46.505 START TEST filesystem_in_capsule_ext4 00:07:46.505 ************************************ 00:07:46.505 20:06:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:46.505 20:06:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:46.505 20:06:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:46.505 20:06:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:46.505 20:06:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:46.505 20:06:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:46.505 20:06:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:46.505 20:06:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:46.505 20:06:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:46.505 20:06:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:46.505 20:06:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:46.505 mke2fs 1.46.5 (30-Dec-2021) 00:07:46.505 Discarding device blocks: 0/522240 done 00:07:46.505 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:46.505 Filesystem UUID: 26cf95cd-af19-4a2a-91cd-62c1e6e11265 00:07:46.505 Superblock backups stored on blocks: 00:07:46.505 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:46.505 00:07:46.505 Allocating group tables: 0/64 done 00:07:46.505 Writing inode tables: 0/64 done 00:07:46.764 Creating journal (8192 blocks): done 00:07:47.700 Writing superblocks and filesystem accounting information: 0/64 done 00:07:47.700 00:07:47.700 20:06:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:47.700 20:06:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:48.634 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:48.634 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:48.634 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:48.634 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:48.634 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:48.634 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:48.634 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 4070162 00:07:48.634 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:48.634 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:48.634 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:48.634 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:48.634 00:07:48.634 real 0m2.138s 00:07:48.634 user 0m0.026s 00:07:48.634 sys 0m0.065s 00:07:48.634 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.634 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:48.634 ************************************ 00:07:48.634 END TEST filesystem_in_capsule_ext4 00:07:48.634 ************************************ 00:07:48.634 
20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:48.635 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:48.635 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:48.635 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.635 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.635 ************************************ 00:07:48.635 START TEST filesystem_in_capsule_btrfs 00:07:48.635 ************************************ 00:07:48.635 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:48.635 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:48.635 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:48.635 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:48.635 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:48.635 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:48.635 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:48.635 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:48.635 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:48.635 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:48.635 20:06:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:48.892 btrfs-progs v6.6.2 00:07:48.893 See https://btrfs.readthedocs.io for more information. 00:07:48.893 00:07:48.893 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:48.893 NOTE: several default settings have changed in version 5.15, please make sure 00:07:48.893 this does not affect your deployments: 00:07:48.893 - DUP for metadata (-m dup) 00:07:48.893 - enabled no-holes (-O no-holes) 00:07:48.893 - enabled free-space-tree (-R free-space-tree) 00:07:48.893 00:07:48.893 Label: (null) 00:07:48.893 UUID: 2d7f872f-2f45-45d9-b71a-0af1b6f0ec6c 00:07:48.893 Node size: 16384 00:07:48.893 Sector size: 4096 00:07:48.893 Filesystem size: 510.00MiB 00:07:48.893 Block group profiles: 00:07:48.893 Data: single 8.00MiB 00:07:48.893 Metadata: DUP 32.00MiB 00:07:48.893 System: DUP 8.00MiB 00:07:48.893 SSD detected: yes 00:07:48.893 Zoned device: no 00:07:48.893 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:48.893 Runtime features: free-space-tree 00:07:48.893 Checksum: crc32c 00:07:48.893 Number of devices: 1 00:07:48.893 Devices: 00:07:48.893 ID SIZE PATH 00:07:48.893 1 510.00MiB /dev/nvme0n1p1 00:07:48.893 00:07:48.893 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:48.893 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 4070162 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:49.152 00:07:49.152 real 0m0.437s 00:07:49.152 user 0m0.023s 00:07:49.152 sys 0m0.128s 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:49.152 ************************************ 00:07:49.152 END TEST filesystem_in_capsule_btrfs 00:07:49.152 ************************************ 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.152 ************************************ 00:07:49.152 START TEST filesystem_in_capsule_xfs 00:07:49.152 ************************************ 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:49.152 20:06:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:49.152 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:49.152 = sectsz=512 attr=2, projid32bit=1 00:07:49.152 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:49.152 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:49.152 data = bsize=4096 blocks=130560, imaxpct=25 00:07:49.152 = sunit=0 swidth=0 blks 00:07:49.152 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:49.152 log =internal log bsize=4096 blocks=16384, version=2 00:07:49.152 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:49.152 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:50.090 Discarding blocks...Done. 
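The xfs case above follows the same make_filesystem pattern as the ext4 and btrfs runs before it: pick the force flag for the filesystem type, run mkfs on the partition created earlier with parted, then mount, create and delete a file, and unmount while checking that the SPDK target process is still alive. A condensed sketch under the same names used in target/filesystem.sh; the real helper in common/autotest_common.sh adds retries and error handling, and the pid below is simply this run's nvmfpid:

# Force flag selection as seen in the '[' ext4 = ext4 ']' checks above: ext4 takes -F, btrfs/xfs take -f.
make_filesystem() {
    local fstype=$1 dev_name=$2 force
    [ "$fstype" = ext4 ] && force=-F || force=-f
    mkfs.$fstype $force "$dev_name"
}

make_filesystem xfs /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa && sync
rm /mnt/device/aaa && sync
umount /mnt/device
kill -0 4070162                        # nvmfpid of this run; confirms the target survived the I/O
lsblk -l -o NAME | grep -q -w nvme0n1
lsblk -l -o NAME | grep -q -w nvme0n1p1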
00:07:50.090 20:06:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:50.090 20:06:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:52.627 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:52.627 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:52.627 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:52.627 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:52.627 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:52.627 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:52.627 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 4070162 00:07:52.627 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:52.627 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:52.627 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:52.627 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:52.627 00:07:52.627 real 0m3.264s 00:07:52.627 user 0m0.022s 00:07:52.627 sys 0m0.073s 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:52.628 ************************************ 00:07:52.628 END TEST filesystem_in_capsule_xfs 00:07:52.628 ************************************ 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:52.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:52.628 20:06:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 4070162 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 4070162 ']' 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 4070162 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4070162 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4070162' 00:07:52.628 killing process with pid 4070162 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 4070162 00:07:52.628 20:06:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 4070162 00:07:53.196 20:06:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:53.196 00:07:53.196 real 0m13.044s 00:07:53.196 user 0m51.136s 00:07:53.196 sys 0m1.327s 00:07:53.196 20:06:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.196 20:06:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.196 ************************************ 00:07:53.196 END TEST nvmf_filesystem_in_capsule 00:07:53.196 ************************************ 00:07:53.197 20:06:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:53.197 20:06:18 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:53.197 20:06:18 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:53.197 20:06:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:53.197 20:06:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:53.197 20:06:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:53.197 20:06:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:53.197 20:06:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:53.197 rmmod nvme_tcp 00:07:53.197 rmmod nvme_fabrics 00:07:53.197 rmmod nvme_keyring 00:07:53.197 20:06:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:53.197 20:06:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:53.197 20:06:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:53.197 20:06:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:53.197 20:06:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:53.197 20:06:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:53.197 20:06:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:53.197 20:06:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:53.197 20:06:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:53.197 20:06:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.197 20:06:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.197 20:06:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.733 20:06:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:55.733 00:07:55.733 real 0m34.858s 00:07:55.733 user 1m46.578s 00:07:55.733 sys 0m7.074s 00:07:55.733 20:06:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.733 20:06:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:55.733 ************************************ 00:07:55.733 END TEST nvmf_filesystem 00:07:55.734 ************************************ 00:07:55.734 20:06:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:55.734 20:06:20 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:55.734 20:06:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:55.734 20:06:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.734 20:06:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:55.734 ************************************ 00:07:55.734 START TEST nvmf_target_discovery 00:07:55.734 ************************************ 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:55.734 * Looking for test storage... 
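The discovery test that starts here goes through the harness's standard phy setup before any NVMe traffic flows, as recorded in the nvmf/common.sh trace below: the two E810 ports are detected (cvl_0_0 and cvl_0_1), one is moved into a private network namespace for the target at 10.0.0.2 while the initiator keeps the other at 10.0.0.1, the NVMe/TCP port is opened in iptables, and both directions are ping-tested before nvmf_tgt is started inside the namespace. Condensed into the bare commands, all taken from this run's nvmf_tcp_init steps:

# Flush both ports and dedicate cvl_0_0 to the target's namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator stays in the root namespace at 10.0.0.1; the target side gets 10.0.0.2.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port and sanity-check reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# The target application then runs inside the namespace.
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF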
00:07:55.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:55.734 20:06:20 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.012 20:06:26 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:01.012 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:01.012 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:01.012 Found net devices under 0000:af:00.0: cvl_0_0 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:01.012 Found net devices under 0000:af:00.1: cvl_0_1 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:01.012 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:01.013 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:01.013 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.013 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.013 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.013 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:01.013 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.013 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.013 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:01.013 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.013 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.013 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:01.013 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:01.013 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.013 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.013 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.013 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.013 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:01.013 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.013 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:01.013 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:01.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:01.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:08:01.272 00:08:01.272 --- 10.0.0.2 ping statistics --- 00:08:01.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.272 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:01.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:01.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:08:01.272 00:08:01.272 --- 10.0.0.1 ping statistics --- 00:08:01.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.272 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=4076658 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 4076658 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 4076658 ']' 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:01.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:01.272 20:06:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.272 [2024-07-15 20:06:26.481102] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:08:01.272 [2024-07-15 20:06:26.481155] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.272 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.272 [2024-07-15 20:06:26.567519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.531 [2024-07-15 20:06:26.658919] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.531 [2024-07-15 20:06:26.658963] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.531 [2024-07-15 20:06:26.658973] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.531 [2024-07-15 20:06:26.658982] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.531 [2024-07-15 20:06:26.658990] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:01.531 [2024-07-15 20:06:26.659038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.531 [2024-07-15 20:06:26.659140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.531 [2024-07-15 20:06:26.659232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.531 [2024-07-15 20:06:26.659234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.100 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:02.100 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:02.100 20:06:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:02.100 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:02.100 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.360 [2024-07-15 20:06:27.471296] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
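The sequence that begins here builds the four discovery targets entirely over JSON-RPC against the nvmf_tgt launched above: create the TCP transport once, then for each subsystem create a null bdev, create the subsystem with a fixed serial, attach the bdev as a namespace and add a TCP listener, and finally expose the discovery subsystem itself plus a referral to port 4430. Outside the rpc_cmd wrapper the same calls can be issued with scripts/rpc.py; a sketch with sizes, NQNs, serials and addresses taken from this run:

# Transport creation, as in target/discovery.sh@23.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

for i in 1 2 3 4; do
    scripts/rpc.py bdev_null_create Null$i 102400 512                  # NULL_BDEV_SIZE / NULL_BLOCK_SIZE
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
        -a -s SPDK0000000000000$i                                      # allow any host, fixed serial
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done

# Make the discovery service itself reachable and add a referral to a second port.
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430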
00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.360 Null1 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.360 [2024-07-15 20:06:27.519548] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.360 Null2 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:02.360 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:02.361 20:06:27 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.361 Null3 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.361 Null4 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.361 20:06:27 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.361 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:08:02.654 00:08:02.654 Discovery Log Number of Records 6, Generation counter 6 00:08:02.654 =====Discovery Log Entry 0====== 00:08:02.654 trtype: tcp 00:08:02.654 adrfam: ipv4 00:08:02.654 subtype: current discovery subsystem 00:08:02.654 treq: not required 00:08:02.654 portid: 0 00:08:02.654 trsvcid: 4420 00:08:02.654 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:02.654 traddr: 10.0.0.2 00:08:02.654 eflags: explicit discovery connections, duplicate discovery information 00:08:02.654 sectype: none 00:08:02.654 =====Discovery Log Entry 1====== 00:08:02.654 trtype: tcp 00:08:02.654 adrfam: ipv4 00:08:02.654 subtype: nvme subsystem 00:08:02.654 treq: not required 00:08:02.654 portid: 0 00:08:02.654 trsvcid: 4420 00:08:02.654 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:02.654 traddr: 10.0.0.2 00:08:02.654 eflags: none 00:08:02.654 sectype: none 00:08:02.654 =====Discovery Log Entry 2====== 00:08:02.654 trtype: tcp 00:08:02.654 adrfam: ipv4 00:08:02.654 subtype: nvme subsystem 00:08:02.654 treq: not required 00:08:02.654 portid: 0 00:08:02.654 trsvcid: 4420 00:08:02.654 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:02.654 traddr: 10.0.0.2 00:08:02.654 eflags: none 00:08:02.654 sectype: none 00:08:02.654 =====Discovery Log Entry 3====== 00:08:02.654 trtype: tcp 00:08:02.654 adrfam: ipv4 00:08:02.654 subtype: nvme subsystem 00:08:02.654 treq: not required 00:08:02.654 portid: 0 00:08:02.654 trsvcid: 4420 00:08:02.654 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:02.654 traddr: 10.0.0.2 00:08:02.654 eflags: none 00:08:02.654 sectype: none 00:08:02.654 =====Discovery Log Entry 4====== 00:08:02.654 trtype: tcp 00:08:02.654 adrfam: ipv4 00:08:02.654 subtype: nvme subsystem 00:08:02.654 treq: not required 
00:08:02.654 portid: 0 00:08:02.654 trsvcid: 4420 00:08:02.654 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:02.654 traddr: 10.0.0.2 00:08:02.654 eflags: none 00:08:02.654 sectype: none 00:08:02.654 =====Discovery Log Entry 5====== 00:08:02.654 trtype: tcp 00:08:02.654 adrfam: ipv4 00:08:02.654 subtype: discovery subsystem referral 00:08:02.654 treq: not required 00:08:02.654 portid: 0 00:08:02.654 trsvcid: 4430 00:08:02.654 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:02.654 traddr: 10.0.0.2 00:08:02.654 eflags: none 00:08:02.654 sectype: none 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:02.654 Perform nvmf subsystem discovery via RPC 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.654 [ 00:08:02.654 { 00:08:02.654 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:02.654 "subtype": "Discovery", 00:08:02.654 "listen_addresses": [ 00:08:02.654 { 00:08:02.654 "trtype": "TCP", 00:08:02.654 "adrfam": "IPv4", 00:08:02.654 "traddr": "10.0.0.2", 00:08:02.654 "trsvcid": "4420" 00:08:02.654 } 00:08:02.654 ], 00:08:02.654 "allow_any_host": true, 00:08:02.654 "hosts": [] 00:08:02.654 }, 00:08:02.654 { 00:08:02.654 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:02.654 "subtype": "NVMe", 00:08:02.654 "listen_addresses": [ 00:08:02.654 { 00:08:02.654 "trtype": "TCP", 00:08:02.654 "adrfam": "IPv4", 00:08:02.654 "traddr": "10.0.0.2", 00:08:02.654 "trsvcid": "4420" 00:08:02.654 } 00:08:02.654 ], 00:08:02.654 "allow_any_host": true, 00:08:02.654 "hosts": [], 00:08:02.654 "serial_number": "SPDK00000000000001", 00:08:02.654 "model_number": "SPDK bdev Controller", 00:08:02.654 "max_namespaces": 32, 00:08:02.654 "min_cntlid": 1, 00:08:02.654 "max_cntlid": 65519, 00:08:02.654 "namespaces": [ 00:08:02.654 { 00:08:02.654 "nsid": 1, 00:08:02.654 "bdev_name": "Null1", 00:08:02.654 "name": "Null1", 00:08:02.654 "nguid": "73D549E638764FC8AF419DC2B10D0E5F", 00:08:02.654 "uuid": "73d549e6-3876-4fc8-af41-9dc2b10d0e5f" 00:08:02.654 } 00:08:02.654 ] 00:08:02.654 }, 00:08:02.654 { 00:08:02.654 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:02.654 "subtype": "NVMe", 00:08:02.654 "listen_addresses": [ 00:08:02.654 { 00:08:02.654 "trtype": "TCP", 00:08:02.654 "adrfam": "IPv4", 00:08:02.654 "traddr": "10.0.0.2", 00:08:02.654 "trsvcid": "4420" 00:08:02.654 } 00:08:02.654 ], 00:08:02.654 "allow_any_host": true, 00:08:02.654 "hosts": [], 00:08:02.654 "serial_number": "SPDK00000000000002", 00:08:02.654 "model_number": "SPDK bdev Controller", 00:08:02.654 "max_namespaces": 32, 00:08:02.654 "min_cntlid": 1, 00:08:02.654 "max_cntlid": 65519, 00:08:02.654 "namespaces": [ 00:08:02.654 { 00:08:02.654 "nsid": 1, 00:08:02.654 "bdev_name": "Null2", 00:08:02.654 "name": "Null2", 00:08:02.654 "nguid": "A9B873CC6FBC4C6782E7F7195F6C30A5", 00:08:02.654 "uuid": "a9b873cc-6fbc-4c67-82e7-f7195f6c30a5" 00:08:02.654 } 00:08:02.654 ] 00:08:02.654 }, 00:08:02.654 { 00:08:02.654 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:02.654 "subtype": "NVMe", 00:08:02.654 "listen_addresses": [ 00:08:02.654 { 00:08:02.654 "trtype": "TCP", 00:08:02.654 "adrfam": "IPv4", 00:08:02.654 "traddr": "10.0.0.2", 00:08:02.654 "trsvcid": "4420" 00:08:02.654 } 00:08:02.654 ], 00:08:02.654 "allow_any_host": true, 
00:08:02.654 "hosts": [], 00:08:02.654 "serial_number": "SPDK00000000000003", 00:08:02.654 "model_number": "SPDK bdev Controller", 00:08:02.654 "max_namespaces": 32, 00:08:02.654 "min_cntlid": 1, 00:08:02.654 "max_cntlid": 65519, 00:08:02.654 "namespaces": [ 00:08:02.654 { 00:08:02.654 "nsid": 1, 00:08:02.654 "bdev_name": "Null3", 00:08:02.654 "name": "Null3", 00:08:02.654 "nguid": "6D63692370BA401B9BC760C1A42A6033", 00:08:02.654 "uuid": "6d636923-70ba-401b-9bc7-60c1a42a6033" 00:08:02.654 } 00:08:02.654 ] 00:08:02.654 }, 00:08:02.654 { 00:08:02.654 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:02.654 "subtype": "NVMe", 00:08:02.654 "listen_addresses": [ 00:08:02.654 { 00:08:02.654 "trtype": "TCP", 00:08:02.654 "adrfam": "IPv4", 00:08:02.654 "traddr": "10.0.0.2", 00:08:02.654 "trsvcid": "4420" 00:08:02.654 } 00:08:02.654 ], 00:08:02.654 "allow_any_host": true, 00:08:02.654 "hosts": [], 00:08:02.654 "serial_number": "SPDK00000000000004", 00:08:02.654 "model_number": "SPDK bdev Controller", 00:08:02.654 "max_namespaces": 32, 00:08:02.654 "min_cntlid": 1, 00:08:02.654 "max_cntlid": 65519, 00:08:02.654 "namespaces": [ 00:08:02.654 { 00:08:02.654 "nsid": 1, 00:08:02.654 "bdev_name": "Null4", 00:08:02.654 "name": "Null4", 00:08:02.654 "nguid": "778CA283BC214D6482A6155D5F3D5368", 00:08:02.654 "uuid": "778ca283-bc21-4d64-82a6-155d5f3d5368" 00:08:02.654 } 00:08:02.654 ] 00:08:02.654 } 00:08:02.654 ] 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.654 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:02.655 rmmod nvme_tcp 00:08:02.655 rmmod nvme_fabrics 00:08:02.655 rmmod nvme_keyring 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 4076658 ']' 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 4076658 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 4076658 ']' 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 4076658 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:02.655 20:06:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4076658 00:08:02.922 20:06:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:02.922 20:06:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:02.922 20:06:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4076658' 00:08:02.922 killing process with pid 4076658 00:08:02.922 20:06:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 4076658 00:08:02.922 20:06:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 4076658 00:08:02.922 20:06:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:02.922 20:06:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:02.922 20:06:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:02.922 20:06:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:02.922 20:06:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:02.922 20:06:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.922 20:06:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.922 20:06:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.461 20:06:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:05.461 00:08:05.461 real 0m9.709s 00:08:05.461 user 0m8.080s 00:08:05.461 sys 0m4.705s 00:08:05.461 20:06:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.461 20:06:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:05.461 ************************************ 00:08:05.461 END TEST nvmf_target_discovery 00:08:05.461 ************************************ 00:08:05.461 20:06:30 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:08:05.461 20:06:30 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:05.461 20:06:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:05.461 20:06:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.461 20:06:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:05.461 ************************************ 00:08:05.461 START TEST nvmf_referrals 00:08:05.461 ************************************ 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:05.461 * Looking for test storage... 00:08:05.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.461 20:06:30 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
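The referral addresses declared here (127.0.0.2 through 127.0.0.4, with the referral port 4430 set just below) drive the rest of the referrals test: each address is registered with the discovery service, read back both through the RPC interface and through `nvme discover` against the 8009 discovery listener, and finally removed again. Assuming the same scripts/rpc.py mapping as above, the add/query/remove cycle the test exercises looks roughly like this sketch (it presumes a target already listening for discovery on 10.0.0.2:8009):

  RPC=./scripts/rpc.py                                  # assumed RPC client path
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr'   # expect the three addresses back
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $RPC nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done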
00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:05.462 20:06:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:10.732 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:10.732 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:10.732 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:10.732 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:10.732 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:10.732 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:10.732 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:10.732 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:10.732 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:10.732 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:10.732 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:10.732 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:10.732 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:10.732 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:10.732 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:10.732 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.732 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.732 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.732 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.732 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.732 20:06:35 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:10.733 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:10.733 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:10.733 20:06:35 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:10.733 Found net devices under 0000:af:00.0: cvl_0_0 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:10.733 Found net devices under 0000:af:00.1: cvl_0_1 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.733 20:06:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.992 20:06:36 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:10.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:08:10.992 00:08:10.992 --- 10.0.0.2 ping statistics --- 00:08:10.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.992 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:10.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:08:10.992 00:08:10.992 --- 10.0.0.1 ping statistics --- 00:08:10.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.992 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=4080626 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 4080626 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 4080626 ']' 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
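By this point the test network is in place: the e810 port cvl_0_0 has been moved into the cvl_0_0_ns_spdk namespace, 10.0.0.1 and 10.0.0.2 assigned to the two ends, connectivity verified with ping in both directions, and nvmf_tgt is being started inside the namespace while the harness waits for its RPC socket. The next RPCs in the log create the TCP transport and the discovery listener on port 8009; a stripped-down manual equivalent, with the waitforlisten helper replaced by a plain sleep and paths assumed relative to an SPDK checkout, would be roughly:

  # Sketch: bring up the target inside the test namespace and enable TCP discovery.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # 4-core mask, as in the log
  sleep 2                                               # crude stand-in for waitforlisten
  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192          # transport options exactly as used by the test
  $RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery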
00:08:10.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:10.992 20:06:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:10.992 [2024-07-15 20:06:36.318491] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:08:10.992 [2024-07-15 20:06:36.318548] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.251 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.251 [2024-07-15 20:06:36.404422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.251 [2024-07-15 20:06:36.496196] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.251 [2024-07-15 20:06:36.496238] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.251 [2024-07-15 20:06:36.496248] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:11.251 [2024-07-15 20:06:36.496262] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:11.251 [2024-07-15 20:06:36.496270] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:11.251 [2024-07-15 20:06:36.500275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.251 [2024-07-15 20:06:36.500296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.251 [2024-07-15 20:06:36.500390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.251 [2024-07-15 20:06:36.500393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.189 [2024-07-15 20:06:37.313866] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.189 [2024-07-15 20:06:37.330094] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 
--hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:12.189 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:12.449 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:12.709 20:06:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:12.968 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:12.968 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:12.968 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:12.968 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:12.968 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:12.968 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.968 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:12.968 20:06:38 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:12.968 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:12.968 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:12.968 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:12.968 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.968 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:13.227 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:13.227 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:13.227 20:06:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.227 20:06:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.227 20:06:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.227 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:13.227 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:13.227 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.227 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:13.227 20:06:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.227 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:13.227 20:06:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.227 20:06:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.227 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:13.228 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:13.228 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:13.228 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.228 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.228 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.228 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.228 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:13.487 20:06:38 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.487 20:06:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.746 20:06:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.746 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:13.746 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:13.746 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.746 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.746 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.746 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.747 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:13.747 
20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:13.747 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:13.747 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:13.747 20:06:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:13.747 20:06:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:13.747 20:06:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:13.747 20:06:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:13.747 20:06:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:13.747 20:06:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:13.747 20:06:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:13.747 rmmod nvme_tcp 00:08:13.747 rmmod nvme_fabrics 00:08:13.747 rmmod nvme_keyring 00:08:13.747 20:06:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:13.747 20:06:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:13.747 20:06:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:13.747 20:06:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 4080626 ']' 00:08:13.747 20:06:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 4080626 00:08:13.747 20:06:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 4080626 ']' 00:08:13.747 20:06:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 4080626 00:08:13.747 20:06:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:13.747 20:06:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:13.747 20:06:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4080626 00:08:13.747 20:06:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:13.747 20:06:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:13.747 20:06:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4080626' 00:08:13.747 killing process with pid 4080626 00:08:13.747 20:06:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 4080626 00:08:13.747 20:06:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 4080626 00:08:14.006 20:06:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:14.006 20:06:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:14.006 20:06:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:14.006 20:06:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.006 20:06:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:14.006 20:06:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.006 20:06:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.006 20:06:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.542 20:06:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:16.542 00:08:16.542 real 0m10.973s 00:08:16.542 user 0m13.559s 00:08:16.542 sys 0m5.133s 00:08:16.542 20:06:41 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.542 20:06:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:16.542 ************************************ 00:08:16.542 END TEST nvmf_referrals 00:08:16.543 ************************************ 00:08:16.543 20:06:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:16.543 20:06:41 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:16.543 20:06:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:16.543 20:06:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.543 20:06:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.543 ************************************ 00:08:16.543 START TEST nvmf_connect_disconnect 00:08:16.543 ************************************ 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:16.543 * Looking for test storage... 00:08:16.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.543 20:06:41 
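Each suite in this job is launched through run_test, which is what produces the START TEST / END TEST banners and the real/user/sys timing lines wrapped around every script. As a rough illustration only (the actual autotest_common.sh wrapper also handles xtrace control, argument checks, and failure accounting; rootdir here is a stand-in for the full workspace path):

    # Illustrative wrapper, not SPDK's actual run_test implementation.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test nvmf_connect_disconnect \
        "$rootdir/test/nvmf/target/connect_disconnect.sh" --transport=tcp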
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:16.543 20:06:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:21.820 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:21.820 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:21.820 20:06:47 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:21.820 Found net devices under 0000:af:00.0: cvl_0_0 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:21.820 Found net devices under 0000:af:00.1: cvl_0_1 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- 
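gather_supported_nvmf_pci_devs walks the known Intel/Mellanox device IDs collected into the e810/x722/mlx arrays above, then resolves each matching PCI address to its kernel net device through sysfs; that is how the two E810 ports end up in net_devs as cvl_0_0 and cvl_0_1. The resolution itself is just a sysfs glob, as a minimal sketch (the pci value is one of the two addresses reported above):

    # Hedged sketch: map a supported NIC's PCI address to its net device name(s).
    pci=0000:af:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs path
    echo "Found net devices under $pci: ${pci_net_devs[*]}"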
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.820 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:22.080 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:22.080 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:22.080 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:22.080 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:22.080 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:22.080 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:22.080 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:22.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:08:22.080 00:08:22.080 --- 10.0.0.2 ping statistics --- 00:08:22.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.080 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:08:22.080 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:22.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
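nvmf_tcp_init then builds the point-to-point topology the pings verify: the target-side port cvl_0_0 is moved into a private namespace (cvl_0_0_ns_spdk) and given 10.0.0.2, the initiator-side port cvl_0_1 stays in the root namespace with 10.0.0.1, and an iptables rule opens the NVMe/TCP listening port. Condensed from the commands above into one hedged recap:

    # Hedged recap of the namespace setup shown in the log.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # reach the target from the root ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and back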
00:08:22.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:08:22.080 00:08:22.080 --- 10.0.0.1 ping statistics --- 00:08:22.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.080 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:08:22.080 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.080 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:22.080 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:22.080 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.080 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:22.080 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:22.080 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.080 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:22.080 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:22.080 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:22.080 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:22.339 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:22.339 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:22.339 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=4084778 00:08:22.339 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 4084778 00:08:22.339 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:22.339 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 4084778 ']' 00:08:22.339 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.339 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:22.339 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.339 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:22.339 20:06:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:22.339 [2024-07-15 20:06:47.493116] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
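nvmfappstart runs the freshly built nvmf_tgt inside that namespace with a 4-core mask (0xF) and all tracepoint groups enabled (0xFFFF), then blocks until the RPC socket answers. Approximately (the polling loop is an illustrative stand-in for the waitforlisten helper, which also checks that the pid stays alive):

    # Hedged sketch of nvmfappstart -m 0xF as it appears above.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Stand-in for waitforlisten: poll the RPC socket until the app responds.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done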
00:08:22.339 [2024-07-15 20:06:47.493172] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.339 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.339 [2024-07-15 20:06:47.578723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.339 [2024-07-15 20:06:47.671185] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.339 [2024-07-15 20:06:47.671225] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.339 [2024-07-15 20:06:47.671235] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.339 [2024-07-15 20:06:47.671245] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.339 [2024-07-15 20:06:47.671252] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.339 [2024-07-15 20:06:47.671305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.339 [2024-07-15 20:06:47.671407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.339 [2024-07-15 20:06:47.671503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.339 [2024-07-15 20:06:47.671505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:23.278 [2024-07-15 20:06:48.488300] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:23.278 20:06:48 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:23.278 [2024-07-15 20:06:48.544333] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:23.278 20:06:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:27.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:40.610 rmmod nvme_tcp 00:08:40.610 rmmod nvme_fabrics 00:08:40.610 rmmod nvme_keyring 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 4084778 ']' 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 4084778 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- 
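With the target up, connect_disconnect.sh provisions one malloc-backed subsystem over RPC and repeats a host connect/disconnect cycle num_iterations=5 times; the five "disconnected 1 controller(s)" lines above are the nvme disconnect output from those iterations. The RPC side matches the commands issued above (rpc_cmd forwards to scripts/rpc.py against the running target); the loop body below is an illustrative version only, since the real script also verifies each connect before disconnecting:

    # Provisioning, as issued in the log:
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 64 512        # 64 MiB bdev, 512 B blocks -> Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Illustrative connect/disconnect cycle:
    for i in $(seq 1 5); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done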
common/autotest_common.sh@948 -- # '[' -z 4084778 ']' 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 4084778 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4084778 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4084778' 00:08:40.610 killing process with pid 4084778 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 4084778 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 4084778 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.610 20:07:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.145 20:07:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:43.145 00:08:43.145 real 0m26.556s 00:08:43.145 user 1m14.127s 00:08:43.145 sys 0m5.738s 00:08:43.145 20:07:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.145 20:07:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:43.145 ************************************ 00:08:43.145 END TEST nvmf_connect_disconnect 00:08:43.145 ************************************ 00:08:43.145 20:07:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:43.145 20:07:07 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:43.145 20:07:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:43.145 20:07:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.145 20:07:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:43.145 ************************************ 00:08:43.145 START TEST nvmf_multitarget 00:08:43.145 ************************************ 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:43.145 * Looking for test storage... 
00:08:43.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:43.145 20:07:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:48.415 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:48.415 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.415 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:48.416 Found net devices under 0000:af:00.0: cvl_0_0 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:48.416 Found net devices under 0000:af:00.1: cvl_0_1 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:48.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:48.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:08:48.416 00:08:48.416 --- 10.0.0.2 ping statistics --- 00:08:48.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.416 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:08:48.416 00:08:48.416 --- 10.0.0.1 ping statistics --- 00:08:48.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.416 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=4091779 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 4091779 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 4091779 ']' 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:48.416 20:07:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:48.675 [2024-07-15 20:07:13.777791] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:08:48.675 [2024-07-15 20:07:13.777849] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.675 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.675 [2024-07-15 20:07:13.864070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.675 [2024-07-15 20:07:13.955783] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.675 [2024-07-15 20:07:13.955825] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.675 [2024-07-15 20:07:13.955836] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.675 [2024-07-15 20:07:13.955845] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.675 [2024-07-15 20:07:13.955852] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.675 [2024-07-15 20:07:13.955899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.675 [2024-07-15 20:07:13.955999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.675 [2024-07-15 20:07:13.956090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.675 [2024-07-15 20:07:13.956092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.609 20:07:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:49.609 20:07:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:49.609 20:07:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:49.609 20:07:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:49.609 20:07:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:49.609 20:07:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.609 20:07:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:49.609 20:07:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:49.609 20:07:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:49.609 20:07:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:49.609 20:07:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:49.609 "nvmf_tgt_1" 00:08:49.609 20:07:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:49.866 "nvmf_tgt_2" 00:08:49.866 20:07:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:49.866 20:07:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:49.866 20:07:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:49.866 20:07:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:50.146 true 00:08:50.146 20:07:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:50.146 true 00:08:50.146 20:07:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:50.146 20:07:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:50.450 rmmod nvme_tcp 00:08:50.450 rmmod nvme_fabrics 00:08:50.450 rmmod nvme_keyring 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 4091779 ']' 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 4091779 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 4091779 ']' 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 4091779 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4091779 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4091779' 00:08:50.450 killing process with pid 4091779 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 4091779 00:08:50.450 20:07:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 4091779 00:08:50.722 20:07:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:50.722 20:07:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:50.722 20:07:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:50.722 20:07:15 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.722 20:07:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:50.722 20:07:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.722 20:07:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.722 20:07:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.625 20:07:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:52.625 00:08:52.625 real 0m9.943s 00:08:52.625 user 0m10.250s 00:08:52.625 sys 0m4.715s 00:08:52.626 20:07:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.626 20:07:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:52.626 ************************************ 00:08:52.626 END TEST nvmf_multitarget 00:08:52.626 ************************************ 00:08:52.884 20:07:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:52.884 20:07:18 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:52.884 20:07:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:52.884 20:07:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.884 20:07:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:52.884 ************************************ 00:08:52.884 START TEST nvmf_rpc 00:08:52.884 ************************************ 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:52.884 * Looking for test storage... 
00:08:52.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.884 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.885 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:52.885 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:52.885 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:52.885 20:07:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:52.885 20:07:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:52.885 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:52.885 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.885 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:52.885 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:52.885 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:52.885 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.885 20:07:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:52.885 20:07:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.885 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:52.885 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:52.885 20:07:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:52.885 20:07:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:59.473 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:59.473 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:59.473 Found net devices under 0000:af:00.0: cvl_0_0 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:59.473 Found net devices under 0000:af:00.1: cvl_0_1 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:59.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:08:59.473 00:08:59.473 --- 10.0.0.2 ping statistics --- 00:08:59.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.473 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:59.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:08:59.473 00:08:59.473 --- 10.0.0.1 ping statistics --- 00:08:59.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.473 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:59.473 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:59.474 20:07:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:59.474 20:07:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.474 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=4095805 00:08:59.474 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:59.474 20:07:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 4095805 00:08:59.474 20:07:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 4095805 ']' 00:08:59.474 20:07:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.474 20:07:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:59.474 20:07:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.474 20:07:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:59.474 20:07:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.474 [2024-07-15 20:07:24.037469] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:08:59.474 [2024-07-15 20:07:24.037530] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.474 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.474 [2024-07-15 20:07:24.122711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.474 [2024-07-15 20:07:24.217485] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.474 [2024-07-15 20:07:24.217520] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:59.474 [2024-07-15 20:07:24.217530] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.474 [2024-07-15 20:07:24.217539] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.474 [2024-07-15 20:07:24.217546] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.474 [2024-07-15 20:07:24.217600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.474 [2024-07-15 20:07:24.217723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.474 [2024-07-15 20:07:24.217838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.474 [2024-07-15 20:07:24.217840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.732 20:07:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:59.732 20:07:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:59.732 20:07:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:59.732 20:07:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:59.732 20:07:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.732 20:07:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.732 20:07:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:59.732 20:07:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.732 20:07:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.732 20:07:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.732 20:07:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:59.732 "tick_rate": 2200000000, 00:08:59.732 "poll_groups": [ 00:08:59.732 { 00:08:59.732 "name": "nvmf_tgt_poll_group_000", 00:08:59.732 "admin_qpairs": 0, 00:08:59.732 "io_qpairs": 0, 00:08:59.732 "current_admin_qpairs": 0, 00:08:59.732 "current_io_qpairs": 0, 00:08:59.732 "pending_bdev_io": 0, 00:08:59.732 "completed_nvme_io": 0, 00:08:59.732 "transports": [] 00:08:59.732 }, 00:08:59.732 { 00:08:59.732 "name": "nvmf_tgt_poll_group_001", 00:08:59.732 "admin_qpairs": 0, 00:08:59.732 "io_qpairs": 0, 00:08:59.732 "current_admin_qpairs": 0, 00:08:59.732 "current_io_qpairs": 0, 00:08:59.732 "pending_bdev_io": 0, 00:08:59.732 "completed_nvme_io": 0, 00:08:59.732 "transports": [] 00:08:59.732 }, 00:08:59.732 { 00:08:59.732 "name": "nvmf_tgt_poll_group_002", 00:08:59.732 "admin_qpairs": 0, 00:08:59.732 "io_qpairs": 0, 00:08:59.732 "current_admin_qpairs": 0, 00:08:59.732 "current_io_qpairs": 0, 00:08:59.732 "pending_bdev_io": 0, 00:08:59.732 "completed_nvme_io": 0, 00:08:59.732 "transports": [] 00:08:59.732 }, 00:08:59.732 { 00:08:59.732 "name": "nvmf_tgt_poll_group_003", 00:08:59.732 "admin_qpairs": 0, 00:08:59.732 "io_qpairs": 0, 00:08:59.732 "current_admin_qpairs": 0, 00:08:59.732 "current_io_qpairs": 0, 00:08:59.732 "pending_bdev_io": 0, 00:08:59.732 "completed_nvme_io": 0, 00:08:59.732 "transports": [] 00:08:59.732 } 00:08:59.732 ] 00:08:59.732 }' 00:08:59.732 20:07:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:59.732 20:07:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:59.732 20:07:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:59.732 20:07:24 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:08:59.732 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:59.732 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:59.732 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:59.732 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:59.732 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.732 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.732 [2024-07-15 20:07:25.061363] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.732 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.732 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:59.732 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.732 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:59.990 "tick_rate": 2200000000, 00:08:59.990 "poll_groups": [ 00:08:59.990 { 00:08:59.990 "name": "nvmf_tgt_poll_group_000", 00:08:59.990 "admin_qpairs": 0, 00:08:59.990 "io_qpairs": 0, 00:08:59.990 "current_admin_qpairs": 0, 00:08:59.990 "current_io_qpairs": 0, 00:08:59.990 "pending_bdev_io": 0, 00:08:59.990 "completed_nvme_io": 0, 00:08:59.990 "transports": [ 00:08:59.990 { 00:08:59.990 "trtype": "TCP" 00:08:59.990 } 00:08:59.990 ] 00:08:59.990 }, 00:08:59.990 { 00:08:59.990 "name": "nvmf_tgt_poll_group_001", 00:08:59.990 "admin_qpairs": 0, 00:08:59.990 "io_qpairs": 0, 00:08:59.990 "current_admin_qpairs": 0, 00:08:59.990 "current_io_qpairs": 0, 00:08:59.990 "pending_bdev_io": 0, 00:08:59.990 "completed_nvme_io": 0, 00:08:59.990 "transports": [ 00:08:59.990 { 00:08:59.990 "trtype": "TCP" 00:08:59.990 } 00:08:59.990 ] 00:08:59.990 }, 00:08:59.990 { 00:08:59.990 "name": "nvmf_tgt_poll_group_002", 00:08:59.990 "admin_qpairs": 0, 00:08:59.990 "io_qpairs": 0, 00:08:59.990 "current_admin_qpairs": 0, 00:08:59.990 "current_io_qpairs": 0, 00:08:59.990 "pending_bdev_io": 0, 00:08:59.990 "completed_nvme_io": 0, 00:08:59.990 "transports": [ 00:08:59.990 { 00:08:59.990 "trtype": "TCP" 00:08:59.990 } 00:08:59.990 ] 00:08:59.990 }, 00:08:59.990 { 00:08:59.990 "name": "nvmf_tgt_poll_group_003", 00:08:59.990 "admin_qpairs": 0, 00:08:59.990 "io_qpairs": 0, 00:08:59.990 "current_admin_qpairs": 0, 00:08:59.990 "current_io_qpairs": 0, 00:08:59.990 "pending_bdev_io": 0, 00:08:59.990 "completed_nvme_io": 0, 00:08:59.990 "transports": [ 00:08:59.990 { 00:08:59.990 "trtype": "TCP" 00:08:59.990 } 00:08:59.990 ] 00:08:59.990 } 00:08:59.990 ] 00:08:59.990 }' 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.990 Malloc1 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:59.990 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.991 [2024-07-15 20:07:25.245748] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:59.991 [2024-07-15 20:07:25.270393] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:08:59.991 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:59.991 could not add new controller: failed to write to nvme-fabrics device 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.991 20:07:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:01.364 20:07:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:01.364 20:07:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:01.364 20:07:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:01.364 20:07:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:01.364 20:07:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:03.895 20:07:28 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:03.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:03.895 [2024-07-15 20:07:28.850451] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:09:03.895 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:03.895 could not add new controller: failed to write to nvme-fabrics device 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.895 20:07:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:04.844 20:07:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:04.844 20:07:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:04.844 20:07:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:04.844 20:07:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:04.844 20:07:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:07.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:07.377 20:07:32 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.377 [2024-07-15 20:07:32.331093] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.377 20:07:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:08.752 20:07:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:08.752 20:07:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:08.752 20:07:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:08.752 20:07:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:08.752 20:07:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:10.698 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:10.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.699 [2024-07-15 20:07:35.840011] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.699 20:07:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:12.075 20:07:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:12.075 20:07:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:09:12.075 20:07:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:12.075 20:07:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:12.075 20:07:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:13.993 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:13.993 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:13.993 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.993 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:13.993 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.993 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:13.993 20:07:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:14.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.251 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.252 [2024-07-15 20:07:39.406563] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:14.252 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.252 20:07:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:14.252 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.252 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.252 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.252 20:07:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:14.252 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.252 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.252 20:07:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.252 20:07:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:15.638 20:07:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:15.638 20:07:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:15.638 20:07:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:15.638 20:07:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:15.638 20:07:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:17.540 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:17.540 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:17.540 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:17.540 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:17.540 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:17.540 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:17.540 20:07:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:17.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.540 20:07:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:17.540 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:17.540 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:17.540 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.540 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:17.540 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.540 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:17.540 20:07:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.540 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.540 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.798 [2024-07-15 20:07:42.919988] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.798 20:07:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:19.172 20:07:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:19.173 20:07:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:19.173 20:07:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:19.173 20:07:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:19.173 20:07:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:21.076 
20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:21.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.076 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.077 20:07:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.077 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.077 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.077 [2024-07-15 20:07:46.346683] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.077 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.077 20:07:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:21.077 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.077 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.077 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.077 20:07:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:21.077 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.077 20:07:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.077 20:07:46 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.077 20:07:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:22.454 20:07:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:22.454 20:07:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:22.454 20:07:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:22.454 20:07:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:22.454 20:07:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:24.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.990 [2024-07-15 20:07:49.856314] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.990 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 [2024-07-15 20:07:49.904451] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 [2024-07-15 20:07:49.956678] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
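
The waitforserial / waitforserial_disconnect helpers being traced above boil down to polling lsblk for the subsystem serial (SPDKISFASTANDAWESOME) until the namespace appears or disappears. A minimal sketch of that idiom, assuming the retry limit and initial sleep visible in the trace; the real helpers live in autotest_common.sh, use slightly different names, and carry extra bookkeeping:

```bash
# Hypothetical stand-ins for waitforserial / waitforserial_disconnect.
wait_for_serial() {
    local serial=$1 i=0
    sleep 2                                    # give the kernel time to enumerate the namespace
    while (( i++ <= 15 )); do
        # count block devices whose SERIAL column matches the subsystem serial
        if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == 1 )); then
            return 0
        fi
        sleep 1
    done
    return 1
}

wait_for_serial_disconnect() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        # done once no block device advertises the serial any more
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
        sleep 1
    done
    return 1
}
```
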
00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 20:07:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 [2024-07-15 20:07:50.004846] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
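
The subsystem lifecycle loop being replayed at this point (target/rpc.sh lines 99-107 in the trace) can be condensed to the following sketch; rpc_cmd is replaced by a direct rpc.py call, and the xtrace/error plumbing is dropped. It assumes a Malloc1 bdev already exists on the target, as in the run above:

```bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for i in $(seq 1 5); do                                          # loops=5 in this run
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME    # fixed serial the host greps for
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1
    $rpc nvmf_subsystem_allow_any_host "$nqn"
    $rpc nvmf_subsystem_remove_ns "$nqn" 1                       # namespace ID 1
    $rpc nvmf_delete_subsystem "$nqn"
done
```
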
00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 [2024-07-15 20:07:50.057019] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.991 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:24.991 "tick_rate": 2200000000, 00:09:24.991 "poll_groups": [ 00:09:24.991 { 00:09:24.991 "name": "nvmf_tgt_poll_group_000", 00:09:24.991 "admin_qpairs": 2, 00:09:24.991 "io_qpairs": 196, 00:09:24.991 "current_admin_qpairs": 0, 00:09:24.991 "current_io_qpairs": 0, 00:09:24.991 "pending_bdev_io": 0, 00:09:24.991 "completed_nvme_io": 248, 00:09:24.991 "transports": [ 00:09:24.991 { 00:09:24.991 "trtype": "TCP" 00:09:24.991 } 00:09:24.991 ] 00:09:24.991 }, 00:09:24.991 { 00:09:24.991 "name": "nvmf_tgt_poll_group_001", 00:09:24.991 "admin_qpairs": 2, 00:09:24.991 "io_qpairs": 196, 00:09:24.991 "current_admin_qpairs": 0, 00:09:24.991 "current_io_qpairs": 0, 00:09:24.991 "pending_bdev_io": 0, 00:09:24.991 "completed_nvme_io": 296, 00:09:24.991 "transports": [ 00:09:24.991 { 00:09:24.991 "trtype": "TCP" 00:09:24.991 } 00:09:24.991 ] 00:09:24.991 }, 00:09:24.991 { 
00:09:24.991 "name": "nvmf_tgt_poll_group_002", 00:09:24.991 "admin_qpairs": 1, 00:09:24.991 "io_qpairs": 196, 00:09:24.991 "current_admin_qpairs": 0, 00:09:24.991 "current_io_qpairs": 0, 00:09:24.991 "pending_bdev_io": 0, 00:09:24.991 "completed_nvme_io": 295, 00:09:24.991 "transports": [ 00:09:24.991 { 00:09:24.991 "trtype": "TCP" 00:09:24.991 } 00:09:24.991 ] 00:09:24.991 }, 00:09:24.991 { 00:09:24.991 "name": "nvmf_tgt_poll_group_003", 00:09:24.991 "admin_qpairs": 2, 00:09:24.991 "io_qpairs": 196, 00:09:24.991 "current_admin_qpairs": 0, 00:09:24.991 "current_io_qpairs": 0, 00:09:24.991 "pending_bdev_io": 0, 00:09:24.991 "completed_nvme_io": 295, 00:09:24.991 "transports": [ 00:09:24.991 { 00:09:24.991 "trtype": "TCP" 00:09:24.991 } 00:09:24.991 ] 00:09:24.991 } 00:09:24.991 ] 00:09:24.991 }' 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:24.992 rmmod nvme_tcp 00:09:24.992 rmmod nvme_fabrics 00:09:24.992 rmmod nvme_keyring 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 4095805 ']' 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 4095805 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 4095805 ']' 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 4095805 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4095805 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4095805' 00:09:24.992 killing process with pid 4095805 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 4095805 00:09:24.992 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 4095805 00:09:25.251 20:07:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:25.251 20:07:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:25.251 20:07:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:25.251 20:07:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:25.251 20:07:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:25.251 20:07:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.251 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.251 20:07:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.811 20:07:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:27.811 00:09:27.811 real 0m34.576s 00:09:27.811 user 1m46.581s 00:09:27.811 sys 0m6.329s 00:09:27.811 20:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.811 20:07:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.811 ************************************ 00:09:27.811 END TEST nvmf_rpc 00:09:27.811 ************************************ 00:09:27.811 20:07:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:27.811 20:07:52 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:27.811 20:07:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:27.811 20:07:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.811 20:07:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:27.811 ************************************ 00:09:27.811 START TEST nvmf_invalid 00:09:27.811 ************************************ 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:27.811 * Looking for test storage... 
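
The qpair accounting that closed the nvmf_rpc test above (jsum over the captured nvmf_get_stats output) amounts to summing a jq filter with awk and asserting a non-zero total. A sketch under those assumptions; variable and helper names are illustrative, not the exact rpc.sh implementation:

```bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
stats=$($rpc nvmf_get_stats)

jsum() {
    local filter=$1
    # one number per poll group -> sum them
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in the run above
(( $(jsum '.poll_groups[].io_qpairs')    > 0 ))   # 784 in the run above
```
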
00:09:27.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:27.811 20:07:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:27.812 20:07:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:27.812 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:27.812 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.812 20:07:52 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:27.812 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:27.812 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:27.812 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.812 20:07:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.812 20:07:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.812 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:27.812 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:27.812 20:07:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:27.812 20:07:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:33.087 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:33.087 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:33.087 Found net devices under 0000:af:00.0: cvl_0_0 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:33.087 Found net devices under 0000:af:00.1: cvl_0_1 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.087 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:33.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:33.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:09:33.088 00:09:33.088 --- 10.0.0.2 ping statistics --- 00:09:33.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.088 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:33.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:09:33.088 00:09:33.088 --- 10.0.0.1 ping statistics --- 00:09:33.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.088 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=4104399 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 4104399 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 4104399 ']' 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.088 20:07:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:33.347 [2024-07-15 20:07:58.458261] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:09:33.347 [2024-07-15 20:07:58.458323] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.347 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.347 [2024-07-15 20:07:58.544974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.347 [2024-07-15 20:07:58.636148] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.347 [2024-07-15 20:07:58.636192] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.347 [2024-07-15 20:07:58.636203] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.347 [2024-07-15 20:07:58.636212] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.347 [2024-07-15 20:07:58.636219] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.347 [2024-07-15 20:07:58.636266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.347 [2024-07-15 20:07:58.636366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.347 [2024-07-15 20:07:58.636457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.347 [2024-07-15 20:07:58.636459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.285 20:07:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:34.285 20:07:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:34.285 20:07:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:34.285 20:07:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:34.285 20:07:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:34.285 20:07:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.285 20:07:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:34.285 20:07:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode19160 00:09:34.545 [2024-07-15 20:07:59.663865] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:34.545 20:07:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:34.545 { 00:09:34.545 "nqn": "nqn.2016-06.io.spdk:cnode19160", 00:09:34.545 "tgt_name": "foobar", 00:09:34.545 "method": "nvmf_create_subsystem", 00:09:34.545 "req_id": 1 00:09:34.545 } 00:09:34.545 Got JSON-RPC error response 00:09:34.545 response: 00:09:34.545 { 00:09:34.545 "code": -32603, 00:09:34.545 "message": "Unable to find target foobar" 00:09:34.545 }' 00:09:34.545 20:07:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:34.545 { 00:09:34.545 "nqn": "nqn.2016-06.io.spdk:cnode19160", 00:09:34.545 "tgt_name": "foobar", 00:09:34.545 "method": "nvmf_create_subsystem", 00:09:34.545 "req_id": 1 00:09:34.545 } 00:09:34.545 Got JSON-RPC error response 00:09:34.545 response: 00:09:34.545 { 00:09:34.545 "code": -32603, 00:09:34.545 "message": "Unable to find target foobar" 
00:09:34.545 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:34.545 20:07:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:34.545 20:07:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29266 00:09:34.545 [2024-07-15 20:07:59.852561] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29266: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:34.545 20:07:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:34.545 { 00:09:34.545 "nqn": "nqn.2016-06.io.spdk:cnode29266", 00:09:34.545 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:34.545 "method": "nvmf_create_subsystem", 00:09:34.545 "req_id": 1 00:09:34.545 } 00:09:34.545 Got JSON-RPC error response 00:09:34.545 response: 00:09:34.545 { 00:09:34.545 "code": -32602, 00:09:34.545 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:34.545 }' 00:09:34.545 20:07:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:34.545 { 00:09:34.545 "nqn": "nqn.2016-06.io.spdk:cnode29266", 00:09:34.545 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:34.545 "method": "nvmf_create_subsystem", 00:09:34.545 "req_id": 1 00:09:34.545 } 00:09:34.545 Got JSON-RPC error response 00:09:34.545 response: 00:09:34.545 { 00:09:34.545 "code": -32602, 00:09:34.545 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:34.545 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:34.545 20:07:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:34.545 20:07:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15147 00:09:34.806 [2024-07-15 20:08:00.121513] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15147: invalid model number 'SPDK_Controller' 00:09:35.092 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:35.092 { 00:09:35.092 "nqn": "nqn.2016-06.io.spdk:cnode15147", 00:09:35.092 "model_number": "SPDK_Controller\u001f", 00:09:35.092 "method": "nvmf_create_subsystem", 00:09:35.092 "req_id": 1 00:09:35.092 } 00:09:35.092 Got JSON-RPC error response 00:09:35.092 response: 00:09:35.092 { 00:09:35.092 "code": -32602, 00:09:35.092 "message": "Invalid MN SPDK_Controller\u001f" 00:09:35.093 }' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:35.093 { 00:09:35.093 "nqn": "nqn.2016-06.io.spdk:cnode15147", 00:09:35.093 "model_number": "SPDK_Controller\u001f", 00:09:35.093 "method": "nvmf_create_subsystem", 00:09:35.093 "req_id": 1 00:09:35.093 } 00:09:35.093 Got JSON-RPC error response 00:09:35.093 response: 00:09:35.093 { 00:09:35.093 "code": -32602, 00:09:35.093 "message": "Invalid MN SPDK_Controller\u001f" 00:09:35.093 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ Q == \- ]] 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Q%R.}|f\,3"v:=t$KMdq;' 00:09:35.093 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Q%R.}|f\,3"v:=t$KMdq;' nqn.2016-06.io.spdk:cnode6459 00:09:35.380 [2024-07-15 20:08:00.522963] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6459: invalid serial number 'Q%R.}|f\,3"v:=t$KMdq;' 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:35.380 { 00:09:35.380 "nqn": "nqn.2016-06.io.spdk:cnode6459", 00:09:35.380 "serial_number": "Q%R.}|f\\,3\"v:=t$KMdq;", 00:09:35.380 "method": "nvmf_create_subsystem", 00:09:35.380 "req_id": 1 00:09:35.380 } 00:09:35.380 Got JSON-RPC error response 00:09:35.380 response: 00:09:35.380 { 00:09:35.380 "code": -32602, 00:09:35.380 "message": "Invalid SN Q%R.}|f\\,3\"v:=t$KMdq;" 00:09:35.380 }' 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:35.380 { 00:09:35.380 "nqn": "nqn.2016-06.io.spdk:cnode6459", 00:09:35.380 "serial_number": "Q%R.}|f\\,3\"v:=t$KMdq;", 00:09:35.380 "method": "nvmf_create_subsystem", 00:09:35.380 "req_id": 1 00:09:35.380 } 00:09:35.380 Got JSON-RPC error response 00:09:35.380 response: 00:09:35.380 { 00:09:35.380 "code": -32602, 00:09:35.380 "message": "Invalid SN Q%R.}|f\\,3\"v:=t$KMdq;" 00:09:35.380 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:35.380 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 65 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.381 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.640 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 
00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ q == \- ]] 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'qxmTpvz>vAG$V?MdiG5]$h^[-3x40iDa4\e`sSLYU' 00:09:35.641 20:08:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'qxmTpvz>vAG$V?MdiG5]$h^[-3x40iDa4\e`sSLYU' nqn.2016-06.io.spdk:cnode24395 00:09:35.900 [2024-07-15 20:08:01.052827] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24395: invalid model number 'qxmTpvz>vAG$V?MdiG5]$h^[-3x40iDa4\e`sSLYU' 00:09:35.900 20:08:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:35.900 { 00:09:35.900 "nqn": "nqn.2016-06.io.spdk:cnode24395", 00:09:35.900 "model_number": "qxmTpvz>vAG$V?MdiG5]$h^[-3x40iDa4\\e`sSLYU", 00:09:35.900 "method": "nvmf_create_subsystem", 00:09:35.900 "req_id": 1 00:09:35.900 } 00:09:35.900 Got JSON-RPC error response 00:09:35.900 response: 00:09:35.900 { 00:09:35.900 "code": -32602, 00:09:35.900 "message": "Invalid MN qxmTpvz>vAG$V?MdiG5]$h^[-3x40iDa4\\e`sSLYU" 00:09:35.900 }' 00:09:35.900 20:08:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:35.900 { 00:09:35.900 "nqn": "nqn.2016-06.io.spdk:cnode24395", 00:09:35.900 "model_number": "qxmTpvz>vAG$V?MdiG5]$h^[-3x40iDa4\\e`sSLYU", 00:09:35.900 "method": "nvmf_create_subsystem", 00:09:35.900 "req_id": 1 00:09:35.900 } 00:09:35.900 Got JSON-RPC error response 00:09:35.900 response: 00:09:35.900 { 00:09:35.900 "code": -32602, 00:09:35.900 "message": "Invalid MN qxmTpvz>vAG$V?MdiG5]$h^[-3x40iDa4\\e`sSLYU" 00:09:35.900 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:35.900 20:08:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:36.159 [2024-07-15 20:08:01.317865] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.159 20:08:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:36.418 20:08:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:36.418 20:08:01 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@67 -- # echo '' 00:09:36.418 20:08:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:36.418 20:08:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:36.418 20:08:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:36.676 [2024-07-15 20:08:01.843864] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:36.676 20:08:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:36.676 { 00:09:36.676 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:36.676 "listen_address": { 00:09:36.676 "trtype": "tcp", 00:09:36.676 "traddr": "", 00:09:36.676 "trsvcid": "4421" 00:09:36.676 }, 00:09:36.677 "method": "nvmf_subsystem_remove_listener", 00:09:36.677 "req_id": 1 00:09:36.677 } 00:09:36.677 Got JSON-RPC error response 00:09:36.677 response: 00:09:36.677 { 00:09:36.677 "code": -32602, 00:09:36.677 "message": "Invalid parameters" 00:09:36.677 }' 00:09:36.677 20:08:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:36.677 { 00:09:36.677 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:36.677 "listen_address": { 00:09:36.677 "trtype": "tcp", 00:09:36.677 "traddr": "", 00:09:36.677 "trsvcid": "4421" 00:09:36.677 }, 00:09:36.677 "method": "nvmf_subsystem_remove_listener", 00:09:36.677 "req_id": 1 00:09:36.677 } 00:09:36.677 Got JSON-RPC error response 00:09:36.677 response: 00:09:36.677 { 00:09:36.677 "code": -32602, 00:09:36.677 "message": "Invalid parameters" 00:09:36.677 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:36.677 20:08:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19868 -i 0 00:09:36.934 [2024-07-15 20:08:02.108721] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19868: invalid cntlid range [0-65519] 00:09:36.934 20:08:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:36.934 { 00:09:36.934 "nqn": "nqn.2016-06.io.spdk:cnode19868", 00:09:36.934 "min_cntlid": 0, 00:09:36.934 "method": "nvmf_create_subsystem", 00:09:36.934 "req_id": 1 00:09:36.934 } 00:09:36.934 Got JSON-RPC error response 00:09:36.934 response: 00:09:36.934 { 00:09:36.934 "code": -32602, 00:09:36.934 "message": "Invalid cntlid range [0-65519]" 00:09:36.934 }' 00:09:36.934 20:08:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:36.934 { 00:09:36.934 "nqn": "nqn.2016-06.io.spdk:cnode19868", 00:09:36.934 "min_cntlid": 0, 00:09:36.934 "method": "nvmf_create_subsystem", 00:09:36.934 "req_id": 1 00:09:36.935 } 00:09:36.935 Got JSON-RPC error response 00:09:36.935 response: 00:09:36.935 { 00:09:36.935 "code": -32602, 00:09:36.935 "message": "Invalid cntlid range [0-65519]" 00:09:36.935 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:36.935 20:08:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11372 -i 65520 00:09:37.192 [2024-07-15 20:08:02.373670] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11372: invalid cntlid range [65520-65519] 00:09:37.192 20:08:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:37.192 { 00:09:37.192 "nqn": "nqn.2016-06.io.spdk:cnode11372", 00:09:37.192 
"min_cntlid": 65520, 00:09:37.192 "method": "nvmf_create_subsystem", 00:09:37.192 "req_id": 1 00:09:37.192 } 00:09:37.192 Got JSON-RPC error response 00:09:37.192 response: 00:09:37.192 { 00:09:37.192 "code": -32602, 00:09:37.192 "message": "Invalid cntlid range [65520-65519]" 00:09:37.192 }' 00:09:37.192 20:08:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:37.192 { 00:09:37.192 "nqn": "nqn.2016-06.io.spdk:cnode11372", 00:09:37.192 "min_cntlid": 65520, 00:09:37.192 "method": "nvmf_create_subsystem", 00:09:37.192 "req_id": 1 00:09:37.192 } 00:09:37.192 Got JSON-RPC error response 00:09:37.192 response: 00:09:37.192 { 00:09:37.192 "code": -32602, 00:09:37.192 "message": "Invalid cntlid range [65520-65519]" 00:09:37.192 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:37.192 20:08:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9461 -I 0 00:09:37.450 [2024-07-15 20:08:02.638635] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9461: invalid cntlid range [1-0] 00:09:37.450 20:08:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:37.450 { 00:09:37.450 "nqn": "nqn.2016-06.io.spdk:cnode9461", 00:09:37.450 "max_cntlid": 0, 00:09:37.450 "method": "nvmf_create_subsystem", 00:09:37.450 "req_id": 1 00:09:37.450 } 00:09:37.450 Got JSON-RPC error response 00:09:37.450 response: 00:09:37.450 { 00:09:37.450 "code": -32602, 00:09:37.450 "message": "Invalid cntlid range [1-0]" 00:09:37.450 }' 00:09:37.450 20:08:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:37.450 { 00:09:37.450 "nqn": "nqn.2016-06.io.spdk:cnode9461", 00:09:37.450 "max_cntlid": 0, 00:09:37.450 "method": "nvmf_create_subsystem", 00:09:37.450 "req_id": 1 00:09:37.450 } 00:09:37.450 Got JSON-RPC error response 00:09:37.450 response: 00:09:37.450 { 00:09:37.450 "code": -32602, 00:09:37.450 "message": "Invalid cntlid range [1-0]" 00:09:37.450 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:37.450 20:08:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12679 -I 65520 00:09:37.708 [2024-07-15 20:08:02.899609] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12679: invalid cntlid range [1-65520] 00:09:37.708 20:08:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:37.708 { 00:09:37.708 "nqn": "nqn.2016-06.io.spdk:cnode12679", 00:09:37.708 "max_cntlid": 65520, 00:09:37.708 "method": "nvmf_create_subsystem", 00:09:37.708 "req_id": 1 00:09:37.708 } 00:09:37.708 Got JSON-RPC error response 00:09:37.708 response: 00:09:37.708 { 00:09:37.708 "code": -32602, 00:09:37.708 "message": "Invalid cntlid range [1-65520]" 00:09:37.708 }' 00:09:37.708 20:08:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:37.708 { 00:09:37.708 "nqn": "nqn.2016-06.io.spdk:cnode12679", 00:09:37.708 "max_cntlid": 65520, 00:09:37.708 "method": "nvmf_create_subsystem", 00:09:37.708 "req_id": 1 00:09:37.708 } 00:09:37.708 Got JSON-RPC error response 00:09:37.708 response: 00:09:37.708 { 00:09:37.708 "code": -32602, 00:09:37.708 "message": "Invalid cntlid range [1-65520]" 00:09:37.708 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:37.708 20:08:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6540 -i 6 -I 5 00:09:37.966 [2024-07-15 20:08:03.160635] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6540: invalid cntlid range [6-5] 00:09:37.966 20:08:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:37.966 { 00:09:37.966 "nqn": "nqn.2016-06.io.spdk:cnode6540", 00:09:37.966 "min_cntlid": 6, 00:09:37.966 "max_cntlid": 5, 00:09:37.966 "method": "nvmf_create_subsystem", 00:09:37.966 "req_id": 1 00:09:37.966 } 00:09:37.966 Got JSON-RPC error response 00:09:37.966 response: 00:09:37.966 { 00:09:37.966 "code": -32602, 00:09:37.966 "message": "Invalid cntlid range [6-5]" 00:09:37.966 }' 00:09:37.966 20:08:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:37.966 { 00:09:37.966 "nqn": "nqn.2016-06.io.spdk:cnode6540", 00:09:37.966 "min_cntlid": 6, 00:09:37.966 "max_cntlid": 5, 00:09:37.966 "method": "nvmf_create_subsystem", 00:09:37.966 "req_id": 1 00:09:37.966 } 00:09:37.966 Got JSON-RPC error response 00:09:37.966 response: 00:09:37.966 { 00:09:37.966 "code": -32602, 00:09:37.966 "message": "Invalid cntlid range [6-5]" 00:09:37.966 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:37.966 20:08:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:38.224 { 00:09:38.224 "name": "foobar", 00:09:38.224 "method": "nvmf_delete_target", 00:09:38.224 "req_id": 1 00:09:38.224 } 00:09:38.224 Got JSON-RPC error response 00:09:38.224 response: 00:09:38.224 { 00:09:38.224 "code": -32602, 00:09:38.224 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:38.224 }' 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:38.224 { 00:09:38.224 "name": "foobar", 00:09:38.224 "method": "nvmf_delete_target", 00:09:38.224 "req_id": 1 00:09:38.224 } 00:09:38.224 Got JSON-RPC error response 00:09:38.224 response: 00:09:38.224 { 00:09:38.224 "code": -32602, 00:09:38.224 "message": "The specified target doesn't exist, cannot delete it." 
00:09:38.224 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:38.224 rmmod nvme_tcp 00:09:38.224 rmmod nvme_fabrics 00:09:38.224 rmmod nvme_keyring 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 4104399 ']' 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 4104399 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 4104399 ']' 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 4104399 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4104399 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4104399' 00:09:38.224 killing process with pid 4104399 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 4104399 00:09:38.224 20:08:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 4104399 00:09:38.481 20:08:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:38.481 20:08:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:38.481 20:08:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:38.481 20:08:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:38.481 20:08:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:38.482 20:08:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.482 20:08:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:38.482 20:08:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.383 20:08:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:40.383 00:09:40.383 real 0m13.043s 00:09:40.383 user 0m24.802s 00:09:40.383 sys 0m5.325s 00:09:40.383 20:08:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:40.383 20:08:05 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:40.383 ************************************ 00:09:40.383 END TEST nvmf_invalid 00:09:40.383 ************************************ 00:09:40.642 20:08:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:40.642 20:08:05 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:40.642 20:08:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:40.642 20:08:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.642 20:08:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:40.642 ************************************ 00:09:40.642 START TEST nvmf_abort 00:09:40.642 ************************************ 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:40.642 * Looking for test storage... 00:09:40.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.642 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.643 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.643 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:40.643 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.643 20:08:05 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:40.643 20:08:05 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:40.643 20:08:05 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:40.643 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:40.643 20:08:05 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.643 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:40.643 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:40.643 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:40.643 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.643 20:08:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.643 20:08:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.643 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:40.643 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:40.643 20:08:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:40.643 20:08:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.236 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:47.236 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:47.236 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.237 
20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:47.237 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:47.237 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:47.237 Found net devices under 0000:af:00.0: cvl_0_0 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:47.237 Found net devices under 0000:af:00.1: cvl_0_1 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:47.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:47.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:09:47.237 00:09:47.237 --- 10.0.0.2 ping statistics --- 00:09:47.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.237 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:47.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:47.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:09:47.237 00:09:47.237 --- 10.0.0.1 ping statistics --- 00:09:47.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.237 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=4109097 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 4109097 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 4109097 ']' 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.237 20:08:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.238 20:08:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.238 20:08:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.238 [2024-07-15 20:08:11.775242] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
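Note on the bring-up traced above: nvmftestinit detects the two ice ports (cvl_0_0 / cvl_0_1), moves one into a private network namespace, and puts each side on 10.0.0.0/24 so the TCP target and initiator can exercise real NICs on a single host. The following is only a condensed restatement of the commands already visible in the nvmf/common.sh xtrace in this log; interface names and addresses are the ones detected on this node, not fixed values:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                         # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                   # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator reachability check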
00:09:47.238 [2024-07-15 20:08:11.775316] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.238 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.238 [2024-07-15 20:08:11.854438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:47.238 [2024-07-15 20:08:11.941917] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.238 [2024-07-15 20:08:11.941961] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.238 [2024-07-15 20:08:11.941976] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.238 [2024-07-15 20:08:11.941985] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.238 [2024-07-15 20:08:11.941993] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.238 [2024-07-15 20:08:11.942101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.238 [2024-07-15 20:08:11.942181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.238 [2024-07-15 20:08:11.942184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.238 [2024-07-15 20:08:12.099269] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.238 Malloc0 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.238 Delay0 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
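The rpc_cmd calls traced here and just below stand up the abort-test target before any traffic is sent. Collected in one place, with arguments exactly as they appear in this run (rpc_cmd is the autotest helper that forwards to scripts/rpc.py against the running nvmf_tgt):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc_cmd bdev_malloc_create 64 4096 -b Malloc0            # 64 MB malloc bdev, 4096-byte blocks
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # then the initiator floods the subsystem with queued reads and aborts them:
  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The large Delay0 latencies are presumably what keep I/O outstanding long enough for the abort requests to land, which is what the success/unsuccess counters further down report.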
00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.238 [2024-07-15 20:08:12.174242] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.238 20:08:12 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:47.238 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.238 [2024-07-15 20:08:12.294415] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:49.139 Initializing NVMe Controllers 00:09:49.139 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:49.139 controller IO queue size 128 less than required 00:09:49.139 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:49.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:49.139 Initialization complete. Launching workers. 
00:09:49.139 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 29057 00:09:49.139 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29121, failed to submit 62 00:09:49.139 success 29061, unsuccess 60, failed 0 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:49.139 rmmod nvme_tcp 00:09:49.139 rmmod nvme_fabrics 00:09:49.139 rmmod nvme_keyring 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 4109097 ']' 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 4109097 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 4109097 ']' 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 4109097 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4109097 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4109097' 00:09:49.139 killing process with pid 4109097 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 4109097 00:09:49.139 20:08:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 4109097 00:09:49.398 20:08:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:49.398 20:08:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:49.398 20:08:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:49.398 20:08:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:49.398 20:08:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:49.398 20:08:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.398 20:08:14 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:49.398 20:08:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.935 20:08:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:51.935 00:09:51.935 real 0m10.956s 00:09:51.935 user 0m11.438s 00:09:51.935 sys 0m5.250s 00:09:51.935 20:08:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:51.935 20:08:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:51.935 ************************************ 00:09:51.935 END TEST nvmf_abort 00:09:51.935 ************************************ 00:09:51.935 20:08:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:51.935 20:08:16 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:51.935 20:08:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:51.935 20:08:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.935 20:08:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:51.935 ************************************ 00:09:51.935 START TEST nvmf_ns_hotplug_stress 00:09:51.935 ************************************ 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:51.935 * Looking for test storage... 00:09:51.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.935 20:08:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:51.935 20:08:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:51.935 20:08:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:57.210 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.210 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:57.210 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:57.210 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:57.210 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:57.210 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:57.210 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:57.210 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:57.210 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:57.210 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:57.210 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:57.210 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:57.210 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:57.210 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:57.210 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:09:57.210 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.210 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.210 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.210 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:57.211 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:57.211 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.211 20:08:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:57.211 Found net devices under 0000:af:00.0: cvl_0_0 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:57.211 Found net devices under 0000:af:00.1: cvl_0_1 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.211 20:08:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:57.211 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:57.469 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:57.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:09:57.469 00:09:57.469 --- 10.0.0.2 ping statistics --- 00:09:57.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.469 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:09:57.469 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:57.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:57.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:09:57.469 00:09:57.469 --- 10.0.0.1 ping statistics --- 00:09:57.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.469 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:09:57.469 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.469 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:57.469 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:57.469 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.469 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:57.469 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:57.469 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.469 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:57.469 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:57.469 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:57.469 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:57.469 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:57.469 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:57.469 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=4113106 00:09:57.469 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:57.470 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 4113106 00:09:57.470 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 4113106 ']' 00:09:57.470 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.470 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:57.470 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.470 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:57.470 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:57.470 [2024-07-15 20:08:22.675647] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:09:57.470 [2024-07-15 20:08:22.675702] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.470 EAL: No free 2048 kB hugepages reported on node 1 00:09:57.470 [2024-07-15 20:08:22.751940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:57.728 [2024-07-15 20:08:22.843635] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.728 [2024-07-15 20:08:22.843677] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.728 [2024-07-15 20:08:22.843687] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.728 [2024-07-15 20:08:22.843696] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.728 [2024-07-15 20:08:22.843703] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.728 [2024-07-15 20:08:22.843806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.728 [2024-07-15 20:08:22.843907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.728 [2024-07-15 20:08:22.843910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.728 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:57.728 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:09:57.728 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:57.728 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:57.728 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:57.728 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.728 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:57.728 20:08:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:57.986 [2024-07-15 20:08:23.209811] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.986 20:08:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:58.244 20:08:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:58.502 [2024-07-15 20:08:23.728639] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:58.502 20:08:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:58.760 20:08:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:09:59.018 Malloc0 00:09:59.018 20:08:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:59.277 Delay0 00:09:59.277 20:08:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.536 20:08:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:59.795 NULL1 00:09:59.795 20:08:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:00.054 20:08:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4113658 00:10:00.054 20:08:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:00.054 20:08:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:00.054 20:08:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.054 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.431 Read completed with error (sct=0, sc=11) 00:10:01.431 20:08:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.431 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.431 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.431 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.431 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.431 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.690 20:08:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:01.690 20:08:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:01.690 true 00:10:01.949 20:08:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:01.949 20:08:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.538 20:08:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.797 20:08:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:02.797 20:08:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:03.056 true 00:10:03.056 
20:08:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:03.056 20:08:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.314 20:08:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.573 20:08:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:03.573 20:08:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:03.833 true 00:10:03.833 20:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:03.833 20:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.091 20:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.361 20:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:04.361 20:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:04.623 true 00:10:04.623 20:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:04.623 20:08:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.000 20:08:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.000 20:08:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:06.000 20:08:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:06.258 true 00:10:06.258 20:08:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:06.258 20:08:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.517 20:08:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.777 20:08:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:06.777 20:08:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:07.035 true 
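The pattern repeating through the rest of this test, reconstructed from the xtrace (a paraphrase of the observable loop, not the literal ns_hotplug_stress.sh source): while the spdk_nvme_perf initiator (PID 4113658 in this run) is still alive, the script hot-removes and re-adds the Delay0 namespace on cnode1 and bumps the NULL1 bdev size by one step each pass, which lines up with the bursts of 'Read completed with error (sct=0, sc=11)' messages on the initiator side:

  null_size=1000
  while kill -0 "$PERF_PID" 2> /dev/null; do                            # loop while perf is running
      "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # hot-unplug the namespace under I/O
      "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # plug it back in
      null_size=$((null_size + 1))
      "$rpc_py" bdev_null_resize NULL1 "$null_size"                     # grow NULL1 while it stays attached
  done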
00:10:07.035 20:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:07.035 20:08:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.972 20:08:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.230 20:08:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:08.230 20:08:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:08.230 true 00:10:08.487 20:08:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:08.487 20:08:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.745 20:08:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.004 20:08:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:09.004 20:08:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:09.004 true 00:10:09.263 20:08:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:09.263 20:08:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.200 20:08:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.200 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.459 20:08:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:10.459 20:08:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:10.459 true 00:10:10.459 20:08:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:10.459 20:08:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.718 20:08:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.976 20:08:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:10.976 20:08:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:11.235 true 00:10:11.235 20:08:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:11.235 20:08:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.173 20:08:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.173 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.173 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.431 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.432 20:08:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:12.432 20:08:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:12.691 true 00:10:12.691 20:08:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:12.691 20:08:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.956 20:08:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.245 20:08:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:13.245 20:08:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:13.519 true 00:10:13.519 20:08:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:13.519 20:08:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.470 20:08:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.730 20:08:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:14.730 20:08:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:14.988 true 00:10:14.988 20:08:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:14.988 20:08:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.247 20:08:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.506 20:08:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 
00:10:15.506 20:08:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:15.766 true 00:10:15.766 20:08:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:15.766 20:08:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.025 20:08:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.284 20:08:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:16.284 20:08:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:16.543 true 00:10:16.543 20:08:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:16.543 20:08:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.475 20:08:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.732 20:08:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:17.732 20:08:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:17.990 true 00:10:17.990 20:08:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:17.990 20:08:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.247 20:08:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.505 20:08:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:18.505 20:08:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:18.762 true 00:10:18.762 20:08:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:18.762 20:08:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.700 20:08:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.956 20:08:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:19.956 20:08:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:20.212 true 00:10:20.212 20:08:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:20.212 20:08:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.469 20:08:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.725 20:08:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:20.725 20:08:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:20.982 true 00:10:20.982 20:08:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:20.982 20:08:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.240 20:08:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.498 20:08:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:21.498 20:08:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:21.756 true 00:10:21.756 20:08:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:21.756 20:08:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.691 20:08:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.949 20:08:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:22.949 20:08:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:23.207 true 00:10:23.207 20:08:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:23.207 20:08:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.465 20:08:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.723 20:08:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:23.723 20:08:48 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:23.980 true 00:10:23.980 20:08:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:23.980 20:08:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.916 20:08:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.174 20:08:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:25.174 20:08:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:25.431 true 00:10:25.431 20:08:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:25.431 20:08:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.690 20:08:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.948 20:08:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:25.948 20:08:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:26.206 true 00:10:26.206 20:08:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:26.206 20:08:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.464 20:08:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.723 20:08:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:26.723 20:08:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:26.981 true 00:10:26.981 20:08:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:26.981 20:08:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.917 20:08:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.176 20:08:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:28.176 20:08:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:28.434 true 00:10:28.434 
20:08:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:28.434 20:08:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.693 20:08:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.952 20:08:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:28.952 20:08:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:29.211 true 00:10:29.211 20:08:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:29.211 20:08:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.471 20:08:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.730 20:08:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:29.730 20:08:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:29.989 true 00:10:29.989 20:08:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:29.989 20:08:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.925 20:08:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.184 Initializing NVMe Controllers 00:10:31.184 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:31.184 Controller IO queue size 128, less than required. 00:10:31.184 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:31.184 Controller IO queue size 128, less than required. 00:10:31.184 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:31.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:31.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:31.184 Initialization complete. Launching workers. 
00:10:31.184 ======================================================== 00:10:31.184 Latency(us) 00:10:31.184 Device Information : IOPS MiB/s Average min max 00:10:31.184 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 601.97 0.29 104117.91 3556.27 1208501.97 00:10:31.184 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13244.02 6.47 9664.19 1825.39 570081.42 00:10:31.184 ======================================================== 00:10:31.184 Total : 13845.99 6.76 13770.65 1825.39 1208501.97 00:10:31.184 00:10:31.184 20:08:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:31.184 20:08:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:31.444 true 00:10:31.444 20:08:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4113658 00:10:31.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4113658) - No such process 00:10:31.444 20:08:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4113658 00:10:31.444 20:08:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.703 20:08:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:31.962 20:08:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:31.962 20:08:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:31.962 20:08:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:31.962 20:08:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:31.962 20:08:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:32.221 null0 00:10:32.221 20:08:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:32.221 20:08:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:32.221 20:08:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:32.480 null1 00:10:32.480 20:08:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:32.480 20:08:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:32.480 20:08:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:32.739 null2 00:10:32.739 20:08:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:32.739 20:08:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:32.739 20:08:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:32.998 
null3 00:10:32.998 20:08:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:32.998 20:08:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:32.998 20:08:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:33.255 null4 00:10:33.255 20:08:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:33.255 20:08:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:33.255 20:08:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:33.514 null5 00:10:33.514 20:08:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:33.514 20:08:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:33.514 20:08:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:33.772 null6 00:10:33.772 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:33.772 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:33.772 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:34.031 null7 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
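By this point the single-namespace loop has exited (the kill -0 at script line 44 reports "No such process" once the I/O generator is gone, and line 53 waits on it), the remaining namespaces 1 and 2 are removed, and lines 58-60 create eight null bdevs, null0 through null7, for the concurrent hot-plug phase. A compact sketch of that setup, reconstructed from the @58-@60 entries; the loop form is an assumption, while the 100 and 4096 arguments are copied from the trace (bdev size in MiB and block size in bytes).

    nthreads=8
    pids=()
    # One 100 MiB null bdev (4096-byte blocks) per worker thread
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096
    done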
00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:34.031 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
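Each of the eight workers runs the add_remove helper from ns_hotplug_stress.sh lines 14-18: it binds one namespace ID to one null bdev and then adds and removes that namespace ten times, while the launcher at lines 62-64 backgrounds one helper per bdev and collects the PIDs for the wait at line 66 (visible below as "wait 4120005 4120007 ..."). A condensed reconstruction follows; the exact source layout is an assumption, but the RPC invocations and the 10-iteration bound match the trace.

    add_remove() {                            # lines 14-18
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # Attach the bdev at a fixed NSID, then detach it again
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    for ((i = 0; i < nthreads; i++)); do      # lines 62-64
        add_remove $((i + 1)) "null$i" &      # one backgrounded worker per namespace
        pids+=($!)
    done
    wait "${pids[@]}"                         # line 66

The shuffled ordering of the @17/@18 entries from here on is simply the xtrace output of the eight backgrounded subshells interleaving in the log.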
00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4120005 4120007 4120010 4120013 4120016 4120019 4120021 4120024 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.032 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:34.290 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:34.290 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.290 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:34.290 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:34.290 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:34.290 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:34.290 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:34.548 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.549 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:34.807 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:34.807 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:34.807 20:08:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:34.807 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:34.807 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:34.807 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:34.807 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:34.807 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.807 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:34.807 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:35.065 20:09:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.065 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:35.323 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:35.323 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:35.323 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.323 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:35.323 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:35.323 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.323 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.323 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:35.323 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:35.323 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:35.323 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.323 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.323 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:35.580 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.580 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.580 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:35.580 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.580 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.580 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:35.580 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.580 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.580 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:35.580 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.580 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.580 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:35.580 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.580 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.580 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:35.580 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:35.580 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:35.839 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.839 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.839 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:35.839 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:35.839 20:09:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.839 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:35.839 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:35.839 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.839 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.839 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:35.839 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:35.839 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.839 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.839 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:36.097 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.097 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.097 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:36.097 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:36.097 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.097 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.097 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:36.097 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.097 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.097 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:36.097 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:36.097 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.097 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.097 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:36.097 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.097 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.097 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:36.097 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:36.355 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.355 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.355 20:09:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:36.355 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.355 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:36.355 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:36.355 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.355 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.355 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:36.355 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:36.355 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.355 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.355 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:36.611 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.611 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.611 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:36.611 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.611 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.611 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:36.611 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:36.611 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.611 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.611 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:36.611 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:36.611 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:36.611 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.611 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.611 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:36.611 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:36.611 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.611 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.611 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:36.871 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.871 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.871 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:36.871 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.871 20:09:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:36.871 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.871 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.871 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:36.871 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:36.871 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:36.871 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.871 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.871 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:36.871 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:10:36.871 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.871 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:36.871 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:37.128 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.128 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.128 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:37.128 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:37.128 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:37.128 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.128 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.128 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:37.128 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.128 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.128 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:37.128 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:37.128 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.128 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.128 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:37.128 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:37.385 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.385 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.385 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:37.385 20:09:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.385 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.385 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.385 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:37.385 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:37.385 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:37.385 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.385 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.385 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:37.385 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:37.385 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.385 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.385 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:37.642 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.642 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.642 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:37.643 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:37.643 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.643 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.643 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:37.643 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:37.643 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:37.643 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.643 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.643 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:37.643 20:09:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:37.901 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.901 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.901 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.901 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:37.901 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.901 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.901 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:37.901 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.901 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.901 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:37.901 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:37.901 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.901 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.901 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:37.901 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:37.901 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.901 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.901 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:38.158 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:38.158 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.158 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.158 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:38.158 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:38.158 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.159 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:38.159 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.159 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:38.159 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:38.159 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:38.416 20:09:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.416 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:38.674 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.674 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.674 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:38.674 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:38.674 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:38.674 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:38.674 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:38.674 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:38.674 20:09:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:38.932 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.932 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.932 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.932 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:38.932 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.932 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.932 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:38.932 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.932 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.932 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:38.932 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.932 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.932 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:38.932 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:38.932 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.932 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.932 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.932 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.190 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.190 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.190 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:39.190 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:39.190 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.190 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.448 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.448 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.448 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.448 
20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.448 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.448 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.448 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:39.705 20:09:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:39.964 rmmod nvme_tcp 00:10:39.964 rmmod nvme_fabrics 00:10:39.964 rmmod nvme_keyring 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 4113106 ']' 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 4113106 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 4113106 ']' 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 4113106 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4113106 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4113106' 00:10:39.964 killing process with pid 4113106 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 4113106 00:10:39.964 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 4113106 00:10:40.222 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- 
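The interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls traced above (script lines 16-18 of ns_hotplug_stress.sh) are the hot-plug stress loop: ten rounds of attaching and detaching namespaces 1-8 on cnode1 while I/O stays in flight. A minimal bash sketch of that loop, reconstructed from the xtrace rather than copied from the script; the per-round grouping and the backgrounding of the RPCs are assumptions based on how the calls interleave in the log:

```bash
# Sketch of the hot-plug stress loop seen above (not the verbatim script).
# null0..null7 are the null bdevs backing namespaces 1..8 on cnode1.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

i=0
while (( i < 10 )); do
    for n in $(seq 1 8); do
        "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n" &
    done
    (( ++i ))
done
wait   # drain outstanding RPCs before teardown
```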
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:40.222 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:40.222 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:40.222 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:40.222 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:40.222 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.222 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.222 20:09:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.786 20:09:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:42.786 00:10:42.786 real 0m50.752s 00:10:42.786 user 3m37.332s 00:10:42.786 sys 0m15.642s 00:10:42.786 20:09:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:42.786 20:09:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:42.786 ************************************ 00:10:42.786 END TEST nvmf_ns_hotplug_stress 00:10:42.786 ************************************ 00:10:42.786 20:09:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:42.786 20:09:07 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:42.786 20:09:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:42.786 20:09:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.786 20:09:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:42.786 ************************************ 00:10:42.786 START TEST nvmf_connect_stress 00:10:42.786 ************************************ 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:42.786 * Looking for test storage... 
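Between the end of the hot-plug loop and the START TEST banner for connect_stress, the trace shows nvmftestfini tearing the previous target down: the nvme-tcp / nvme-fabrics modules are unloaded (with retries, since they can still be busy right after the stress run), the nvmf_tgt reactor (pid 4113106) is killed and reaped, the spdk network namespace is removed, and the leftover address on cvl_0_1 is flushed. A simplified sketch of that order; the function below is illustrative, the authoritative versions are nvmftestfini, killprocess and _remove_spdk_ns in the SPDK test scripts:

```bash
# Teardown order reconstructed from the xtrace above (illustrative names).
teardown_tcp_target() {
    local nvmfpid=$1
    sync
    for _ in {1..20}; do
        modprobe -v -r nvme-tcp && break     # may fail while connections drain
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" 2> /dev/null
    wait "$nvmfpid" 2> /dev/null             # reap the nvmf_tgt reactor
    ip netns del cvl_0_0_ns_spdk 2> /dev/null  # handled by _remove_spdk_ns in the real scripts
    ip -4 addr flush cvl_0_1                 # clear the initiator-side test address
}
```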
00:10:42.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:42.786 20:09:07 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:42.787 20:09:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:48.060 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.060 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:48.061 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:48.061 Found net devices under 0000:af:00.0: cvl_0_0 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:48.061 20:09:12 
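The device scan in this part of the trace walks the detected PCI devices, keeps the ones whose vendor/device IDs are on the e810 list (0x8086:0x159b here, bound to the ice driver), and resolves each matching port to its kernel net device via sysfs, which is how cvl_0_0 and cvl_0_1 are found. A simplified stand-in for that lookup (the real common.sh builds a pci_bus_cache first; this sketch reads sysfs directly):

```bash
# Simplified stand-in for the E810 discovery traced above: match PCI
# vendor/device IDs and map each matching port to its net device name.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")    # e.g. 0x8086
    device=$(<"$pci/device")    # e.g. 0x159b (Intel E810, ice driver)
    [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done
```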
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:48.061 Found net devices under 0000:af:00.1: cvl_0_1 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.061 20:09:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:48.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:48.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:10:48.061 00:10:48.061 --- 10.0.0.2 ping statistics --- 00:10:48.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.061 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:48.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:48.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:10:48.061 00:10:48.061 --- 10.0.0.1 ping statistics --- 00:10:48.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.061 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=4125210 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 4125210 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 4125210 ']' 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:48.061 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.061 [2024-07-15 20:09:13.251664] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
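With both ports identified, nvmf_tcp_init builds the point-to-point topology used by the tcp/phy tests in this run: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened in the firewall, and a ping in each direction confirms the link before the target is started. Condensed from the ip/iptables/ping commands visible in the trace:

```bash
# NVMe/TCP test topology, condensed from the commands in the trace above.
target_if=cvl_0_0        # moved into a namespace; nvmf_tgt binds here
initiator_if=cvl_0_1     # stays in the root namespace; host side
ns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"

ip netns add "$ns"
ip link set "$target_if" netns "$ns"

ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up

iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                       # initiator -> target
ip netns exec "$ns" ping -c 1 10.0.0.1   # target -> initiator
```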
00:10:48.061 [2024-07-15 20:09:13.251717] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.061 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.061 [2024-07-15 20:09:13.328058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:48.320 [2024-07-15 20:09:13.418843] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.320 [2024-07-15 20:09:13.418887] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.320 [2024-07-15 20:09:13.418898] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.320 [2024-07-15 20:09:13.418906] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.320 [2024-07-15 20:09:13.418913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:48.320 [2024-07-15 20:09:13.419019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.320 [2024-07-15 20:09:13.419119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.320 [2024-07-15 20:09:13.419122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.320 [2024-07-15 20:09:13.564228] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.320 [2024-07-15 20:09:13.598391] tcp.c: 
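The target itself is launched inside that namespace (nvmfappstart -m 0xE in the trace), so it binds 10.0.0.2 while the initiator-side tooling keeps running in the root namespace, and the test waits for the RPC socket before provisioning anything. A rough equivalent of launch-and-wait; polling rpc_get_methods below is a stand-in for the waitforlisten helper used by the test:

```bash
# Launch nvmf_tgt inside the target namespace and wait for its RPC socket.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# stand-in for waitforlisten: poll the default RPC socket until it answers
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is up"
```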
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.320 NULL1 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=4125390 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.320 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.321 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.321 20:09:13 
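Once the target answers, the connect_stress test provisions it with four RPCs and then starts the connect/disconnect generator against the new subsystem, recording its pid so the watchdog loop further down can tell when the 10-second run ends. The sequence below mirrors the rpc_cmd calls in the trace, with every value taken from the log; calling rpc.py directly here stands in for the test's rpc_cmd wrapper:

```bash
# Provisioning + stress launch, mirrored from the rpc_cmd calls above.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py"
nqn=nqn.2016-06.io.spdk:cnode1

"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512 B blocks

# 10 seconds of connect/disconnect churn against the listener above
"$spdk/test/nvme/connect_stress/connect_stress" -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -t 10 &
PERF_PID=$!
```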
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.321 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.579 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.579 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.579 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.579 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.579 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.579 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.579 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.579 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.579 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.579 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.579 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.579 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.579 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:48.579 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:48.579 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:48.579 20:09:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.579 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.579 20:09:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.837 20:09:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.837 20:09:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:48.837 20:09:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.837 20:09:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.837 20:09:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.096 20:09:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.096 20:09:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:49.096 20:09:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.096 20:09:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.096 20:09:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.354 20:09:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.354 20:09:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 4125390 00:10:49.354 20:09:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.354 20:09:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.354 20:09:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.921 20:09:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.921 20:09:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:49.921 20:09:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.921 20:09:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.921 20:09:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.181 20:09:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.181 20:09:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:50.181 20:09:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.181 20:09:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.181 20:09:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.440 20:09:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.440 20:09:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:50.440 20:09:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.440 20:09:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.440 20:09:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.700 20:09:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.700 20:09:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:50.700 20:09:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.700 20:09:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.700 20:09:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.959 20:09:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.959 20:09:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:50.959 20:09:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.959 20:09:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.959 20:09:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.526 20:09:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.526 20:09:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:51.526 20:09:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.526 20:09:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.526 20:09:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.785 20:09:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.785 20:09:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:51.785 20:09:16 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.785 20:09:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.785 20:09:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.043 20:09:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.043 20:09:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:52.043 20:09:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.043 20:09:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.043 20:09:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.301 20:09:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.301 20:09:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:52.301 20:09:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.301 20:09:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.301 20:09:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.865 20:09:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.865 20:09:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:52.865 20:09:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.865 20:09:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.865 20:09:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.122 20:09:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.122 20:09:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:53.122 20:09:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.122 20:09:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.122 20:09:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.379 20:09:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.379 20:09:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:53.379 20:09:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.379 20:09:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.379 20:09:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.637 20:09:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.637 20:09:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:53.637 20:09:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.637 20:09:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.637 20:09:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.896 20:09:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.896 20:09:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:53.896 20:09:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:10:53.896 20:09:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.896 20:09:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.462 20:09:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.462 20:09:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:54.462 20:09:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.462 20:09:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.462 20:09:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.720 20:09:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.720 20:09:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:54.720 20:09:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.720 20:09:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.720 20:09:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.979 20:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.979 20:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:54.979 20:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.979 20:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.979 20:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.237 20:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.238 20:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:55.238 20:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.238 20:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.238 20:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.804 20:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.804 20:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:55.804 20:09:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.804 20:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.804 20:09:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.062 20:09:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.062 20:09:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:56.062 20:09:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.062 20:09:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.062 20:09:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.320 20:09:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.320 20:09:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:56.320 20:09:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.320 20:09:21 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.320 20:09:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.578 20:09:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.578 20:09:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:56.578 20:09:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.578 20:09:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.578 20:09:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.836 20:09:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.836 20:09:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:56.836 20:09:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.836 20:09:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.836 20:09:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.402 20:09:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.402 20:09:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:57.402 20:09:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.402 20:09:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.402 20:09:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.660 20:09:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.660 20:09:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:57.660 20:09:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.660 20:09:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.660 20:09:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.919 20:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.919 20:09:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:57.919 20:09:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.919 20:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.919 20:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.177 20:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.177 20:09:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:58.177 20:09:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.177 20:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.177 20:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.435 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:58.435 20:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.435 20:09:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4125390 00:10:58.695 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4125390) - No such process 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 4125390 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:58.695 rmmod nvme_tcp 00:10:58.695 rmmod nvme_fabrics 00:10:58.695 rmmod nvme_keyring 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 4125210 ']' 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 4125210 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 4125210 ']' 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 4125210 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4125210 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4125210' 00:10:58.695 killing process with pid 4125210 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 4125210 00:10:58.695 20:09:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 4125210 00:10:58.955 20:09:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:58.955 20:09:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:58.955 20:09:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:58.955 20:09:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:58.955 20:09:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:58.955 20:09:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.955 20:09:24 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:58.955 20:09:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.860 20:09:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:00.860 00:11:00.860 real 0m18.538s 00:11:00.860 user 0m39.729s 00:11:00.860 sys 0m7.722s 00:11:00.860 20:09:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:00.860 20:09:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.860 ************************************ 00:11:00.860 END TEST nvmf_connect_stress 00:11:00.860 ************************************ 00:11:01.119 20:09:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:01.119 20:09:26 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:01.119 20:09:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:01.119 20:09:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.119 20:09:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:01.119 ************************************ 00:11:01.119 START TEST nvmf_fused_ordering 00:11:01.119 ************************************ 00:11:01.119 20:09:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:01.119 * Looking for test storage... 00:11:01.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.119 20:09:26 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.119 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:01.119 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.119 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.119 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.119 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.119 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.119 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.119 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.119 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.119 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.119 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.119 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:01.119 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:01.119 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.119 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.119 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 
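
The connect_stress phase that ends above boils down to a simple liveness pattern: the stress tool runs in the background while the harness keeps probing it with kill -0 and issuing an RPC against the target; once kill -0 reports "No such process", the harness waits for the exit status, removes its scratch rpc.txt and tears the target down (nvmftestfini, module unload, killprocess). A minimal sketch of that pattern, with the placeholders stress_tool and probe_target standing in for the suite's real binaries:

#!/usr/bin/env bash
# Hedged sketch of the liveness-polling loop seen in connect_stress.sh above.
# stress_tool and probe_target are placeholders, not SPDK binaries.

stress_tool &                         # background workload (e.g. repeated connects)
pid=$!

while kill -0 "$pid" 2>/dev/null; do  # still running?
    probe_target                      # e.g. an RPC call against the nvmf target
    sleep 1
done

wait "$pid"; status=$?                # collect the exit status ("wait 4125390" above)
rm -f rpc.txt                         # drop the scratch RPC output, as the script does
echo "stress run finished with status ${status}"
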
-- # NET_TYPE=phy 00:11:01.119 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.119 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.120 20:09:26 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:01.120 20:09:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:06.396 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:06.396 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:06.396 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:06.397 Found net devices under 0000:af:00.0: cvl_0_0 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:06.397 Found net devices under 0000:af:00.1: cvl_0_1 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:06.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:06.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:11:06.397 00:11:06.397 --- 10.0.0.2 ping statistics --- 00:11:06.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.397 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:06.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:06.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:11:06.397 00:11:06.397 --- 10.0.0.1 ping statistics --- 00:11:06.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.397 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=4130790 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 4130790 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- 
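
Before the target application is started, nvmf/common.sh has just split the two ice-bound E810 ports it discovered (cvl_0_0 and cvl_0_1) into a target namespace and an initiator side, assigned 10.0.0.2 and 10.0.0.1, opened TCP port 4420 and ping-tested both directions. Condensed into a standalone sketch (interface names are the ones from this run and will differ on other machines):

#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init network split shown above: one port moves into a
# network namespace as the target, the other stays in the root namespace as the
# initiator. Names mirror this run only.
set -e

TGT_NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0        # target side, 10.0.0.2
INI_IF=cvl_0_1        # initiator side, 10.0.0.1

ip netns add "$TGT_NS"
ip link set "$TGT_IF" netns "$TGT_NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$TGT_NS" ip link set "$TGT_IF" up
ip netns exec "$TGT_NS" ip link set lo up

# Allow NVMe/TCP traffic in on the initiator port, then verify both directions.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$TGT_NS" ping -c 1 10.0.0.1
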
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 4130790 ']' 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.397 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:06.397 [2024-07-15 20:09:31.583470] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:11:06.397 [2024-07-15 20:09:31.583509] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.397 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.397 [2024-07-15 20:09:31.646290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.397 [2024-07-15 20:09:31.731872] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.397 [2024-07-15 20:09:31.731918] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.397 [2024-07-15 20:09:31.731928] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.397 [2024-07-15 20:09:31.731937] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.397 [2024-07-15 20:09:31.731945] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
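
With the namespace plumbed, the next step visible here is nvmfappstart: nvmf_tgt is launched inside the target namespace on core 1 (-m 0x2) with all tracepoint groups enabled (-e 0xFFFF), and the harness blocks until the JSON-RPC socket at /var/tmp/spdk.sock is usable. A rough equivalent, using a plain socket poll in place of the suite's waitforlisten helper:

#!/usr/bin/env bash
# Sketch of the nvmfappstart step above. The poll loop is a stand-in for
# waitforlisten; $! is the pid of the ip-netns-exec wrapper, not nvmf_tgt itself.
set -e

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/spdk.sock

ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Wait up to ~30 s for the JSON-RPC unix socket to appear.
for _ in $(seq 1 300); do
    [ -S "$SOCK" ] && break
    sleep 0.1
done
[ -S "$SOCK" ] || { echo "nvmf_tgt (wrapper pid $nvmfpid) never opened $SOCK" >&2; exit 1; }
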
00:11:06.397 [2024-07-15 20:09:31.731971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:06.657 [2024-07-15 20:09:31.866733] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:06.657 [2024-07-15 20:09:31.886907] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:06.657 NULL1 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.657 20:09:31 
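
Stripped of the xtrace noise, the rpc_cmd sequence in this block is a compact recipe for standing up the TCP target that the fused-ordering run below exercises: create the transport, a subsystem that allows any host, a listener on 10.0.0.2:4420, and a 1000 MiB null bdev attached as namespace 1. Restated as plain scripts/rpc.py calls (rpc_cmd in the suite is, on my reading, a thin wrapper around rpc.py aimed at the target's socket), with arguments copied from this run:

#!/usr/bin/env bash
# The rpc_cmd calls above rewritten as explicit scripts/rpc.py invocations.
set -e

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NQN=nqn.2016-06.io.spdk:cnode1
rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512           # 1000 MiB null bdev, 512 B blocks
rpc bdev_wait_for_examine
rpc nvmf_subsystem_add_ns "$NQN" NULL1

# The test then points the fused_ordering helper (the command just below) at this
# listener and lets it log its numbered fused_ordering(N) iterations.
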
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.657 20:09:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:06.657 [2024-07-15 20:09:31.940427] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:11:06.657 [2024-07-15 20:09:31.940461] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4130813 ] 00:11:06.657 EAL: No free 2048 kB hugepages reported on node 1 00:11:07.225 Attached to nqn.2016-06.io.spdk:cnode1 00:11:07.225 Namespace ID: 1 size: 1GB 00:11:07.225 fused_ordering(0) 00:11:07.225 fused_ordering(1) 00:11:07.225 fused_ordering(2) 00:11:07.225 fused_ordering(3) 00:11:07.225 fused_ordering(4) 00:11:07.225 fused_ordering(5) 00:11:07.225 fused_ordering(6) 00:11:07.225 fused_ordering(7) 00:11:07.225 fused_ordering(8) 00:11:07.225 fused_ordering(9) 00:11:07.225 fused_ordering(10) 00:11:07.225 fused_ordering(11) 00:11:07.225 fused_ordering(12) 00:11:07.225 fused_ordering(13) 00:11:07.225 fused_ordering(14) 00:11:07.225 fused_ordering(15) 00:11:07.225 fused_ordering(16) 00:11:07.225 fused_ordering(17) 00:11:07.225 fused_ordering(18) 00:11:07.225 fused_ordering(19) 00:11:07.225 fused_ordering(20) 00:11:07.225 fused_ordering(21) 00:11:07.225 fused_ordering(22) 00:11:07.225 fused_ordering(23) 00:11:07.225 fused_ordering(24) 00:11:07.225 fused_ordering(25) 00:11:07.225 fused_ordering(26) 00:11:07.225 fused_ordering(27) 00:11:07.225 fused_ordering(28) 00:11:07.225 fused_ordering(29) 00:11:07.225 fused_ordering(30) 00:11:07.225 fused_ordering(31) 00:11:07.225 fused_ordering(32) 00:11:07.225 fused_ordering(33) 00:11:07.225 fused_ordering(34) 00:11:07.225 fused_ordering(35) 00:11:07.225 fused_ordering(36) 00:11:07.225 fused_ordering(37) 00:11:07.225 fused_ordering(38) 00:11:07.225 fused_ordering(39) 00:11:07.225 fused_ordering(40) 00:11:07.225 fused_ordering(41) 00:11:07.225 fused_ordering(42) 00:11:07.225 fused_ordering(43) 00:11:07.225 fused_ordering(44) 00:11:07.225 fused_ordering(45) 00:11:07.225 fused_ordering(46) 00:11:07.225 fused_ordering(47) 00:11:07.225 fused_ordering(48) 00:11:07.225 fused_ordering(49) 00:11:07.225 fused_ordering(50) 00:11:07.225 fused_ordering(51) 00:11:07.225 fused_ordering(52) 00:11:07.225 fused_ordering(53) 00:11:07.225 fused_ordering(54) 00:11:07.225 fused_ordering(55) 00:11:07.225 fused_ordering(56) 00:11:07.225 fused_ordering(57) 00:11:07.225 fused_ordering(58) 00:11:07.225 fused_ordering(59) 00:11:07.225 fused_ordering(60) 00:11:07.225 fused_ordering(61) 00:11:07.225 fused_ordering(62) 00:11:07.225 fused_ordering(63) 00:11:07.225 fused_ordering(64) 00:11:07.225 fused_ordering(65) 00:11:07.225 fused_ordering(66) 00:11:07.225 fused_ordering(67) 00:11:07.225 fused_ordering(68) 00:11:07.225 fused_ordering(69) 00:11:07.225 fused_ordering(70) 00:11:07.225 fused_ordering(71) 00:11:07.225 fused_ordering(72) 00:11:07.225 fused_ordering(73) 00:11:07.225 fused_ordering(74) 00:11:07.225 fused_ordering(75) 00:11:07.225 fused_ordering(76) 00:11:07.225 fused_ordering(77) 00:11:07.225 fused_ordering(78) 00:11:07.225 
fused_ordering(79) 00:11:07.225 fused_ordering(80) 00:11:07.225 fused_ordering(81) 00:11:07.225 fused_ordering(82) 00:11:07.225 fused_ordering(83) 00:11:07.225 fused_ordering(84) 00:11:07.225 fused_ordering(85) 00:11:07.225 fused_ordering(86) 00:11:07.225 fused_ordering(87) 00:11:07.225 fused_ordering(88) 00:11:07.225 fused_ordering(89) 00:11:07.225 fused_ordering(90) 00:11:07.225 fused_ordering(91) 00:11:07.225 fused_ordering(92) 00:11:07.225 fused_ordering(93) 00:11:07.225 fused_ordering(94) 00:11:07.225 fused_ordering(95) 00:11:07.225 fused_ordering(96) 00:11:07.225 fused_ordering(97) 00:11:07.225 fused_ordering(98) 00:11:07.225 fused_ordering(99) 00:11:07.225 fused_ordering(100) 00:11:07.225 fused_ordering(101) 00:11:07.225 fused_ordering(102) 00:11:07.225 fused_ordering(103) 00:11:07.225 fused_ordering(104) 00:11:07.225 fused_ordering(105) 00:11:07.225 fused_ordering(106) 00:11:07.225 fused_ordering(107) 00:11:07.225 fused_ordering(108) 00:11:07.225 fused_ordering(109) 00:11:07.226 fused_ordering(110) 00:11:07.226 fused_ordering(111) 00:11:07.226 fused_ordering(112) 00:11:07.226 fused_ordering(113) 00:11:07.226 fused_ordering(114) 00:11:07.226 fused_ordering(115) 00:11:07.226 fused_ordering(116) 00:11:07.226 fused_ordering(117) 00:11:07.226 fused_ordering(118) 00:11:07.226 fused_ordering(119) 00:11:07.226 fused_ordering(120) 00:11:07.226 fused_ordering(121) 00:11:07.226 fused_ordering(122) 00:11:07.226 fused_ordering(123) 00:11:07.226 fused_ordering(124) 00:11:07.226 fused_ordering(125) 00:11:07.226 fused_ordering(126) 00:11:07.226 fused_ordering(127) 00:11:07.226 fused_ordering(128) 00:11:07.226 fused_ordering(129) 00:11:07.226 fused_ordering(130) 00:11:07.226 fused_ordering(131) 00:11:07.226 fused_ordering(132) 00:11:07.226 fused_ordering(133) 00:11:07.226 fused_ordering(134) 00:11:07.226 fused_ordering(135) 00:11:07.226 fused_ordering(136) 00:11:07.226 fused_ordering(137) 00:11:07.226 fused_ordering(138) 00:11:07.226 fused_ordering(139) 00:11:07.226 fused_ordering(140) 00:11:07.226 fused_ordering(141) 00:11:07.226 fused_ordering(142) 00:11:07.226 fused_ordering(143) 00:11:07.226 fused_ordering(144) 00:11:07.226 fused_ordering(145) 00:11:07.226 fused_ordering(146) 00:11:07.226 fused_ordering(147) 00:11:07.226 fused_ordering(148) 00:11:07.226 fused_ordering(149) 00:11:07.226 fused_ordering(150) 00:11:07.226 fused_ordering(151) 00:11:07.226 fused_ordering(152) 00:11:07.226 fused_ordering(153) 00:11:07.226 fused_ordering(154) 00:11:07.226 fused_ordering(155) 00:11:07.226 fused_ordering(156) 00:11:07.226 fused_ordering(157) 00:11:07.226 fused_ordering(158) 00:11:07.226 fused_ordering(159) 00:11:07.226 fused_ordering(160) 00:11:07.226 fused_ordering(161) 00:11:07.226 fused_ordering(162) 00:11:07.226 fused_ordering(163) 00:11:07.226 fused_ordering(164) 00:11:07.226 fused_ordering(165) 00:11:07.226 fused_ordering(166) 00:11:07.226 fused_ordering(167) 00:11:07.226 fused_ordering(168) 00:11:07.226 fused_ordering(169) 00:11:07.226 fused_ordering(170) 00:11:07.226 fused_ordering(171) 00:11:07.226 fused_ordering(172) 00:11:07.226 fused_ordering(173) 00:11:07.226 fused_ordering(174) 00:11:07.226 fused_ordering(175) 00:11:07.226 fused_ordering(176) 00:11:07.226 fused_ordering(177) 00:11:07.226 fused_ordering(178) 00:11:07.226 fused_ordering(179) 00:11:07.226 fused_ordering(180) 00:11:07.226 fused_ordering(181) 00:11:07.226 fused_ordering(182) 00:11:07.226 fused_ordering(183) 00:11:07.226 fused_ordering(184) 00:11:07.226 fused_ordering(185) 00:11:07.226 fused_ordering(186) 00:11:07.226 
fused_ordering(187) 00:11:07.226 fused_ordering(188) 00:11:07.226 fused_ordering(189) 00:11:07.226 fused_ordering(190) 00:11:07.226 fused_ordering(191) 00:11:07.226 fused_ordering(192) 00:11:07.226 fused_ordering(193) 00:11:07.226 fused_ordering(194) 00:11:07.226 fused_ordering(195) 00:11:07.226 fused_ordering(196) 00:11:07.226 fused_ordering(197) 00:11:07.226 fused_ordering(198) 00:11:07.226 fused_ordering(199) 00:11:07.226 fused_ordering(200) 00:11:07.226 fused_ordering(201) 00:11:07.226 fused_ordering(202) 00:11:07.226 fused_ordering(203) 00:11:07.226 fused_ordering(204) 00:11:07.226 fused_ordering(205) 00:11:07.485 fused_ordering(206) 00:11:07.486 fused_ordering(207) 00:11:07.486 fused_ordering(208) 00:11:07.486 fused_ordering(209) 00:11:07.486 fused_ordering(210) 00:11:07.486 fused_ordering(211) 00:11:07.486 fused_ordering(212) 00:11:07.486 fused_ordering(213) 00:11:07.486 fused_ordering(214) 00:11:07.486 fused_ordering(215) 00:11:07.486 fused_ordering(216) 00:11:07.486 fused_ordering(217) 00:11:07.486 fused_ordering(218) 00:11:07.486 fused_ordering(219) 00:11:07.486 fused_ordering(220) 00:11:07.486 fused_ordering(221) 00:11:07.486 fused_ordering(222) 00:11:07.486 fused_ordering(223) 00:11:07.486 fused_ordering(224) 00:11:07.486 fused_ordering(225) 00:11:07.486 fused_ordering(226) 00:11:07.486 fused_ordering(227) 00:11:07.486 fused_ordering(228) 00:11:07.486 fused_ordering(229) 00:11:07.486 fused_ordering(230) 00:11:07.486 fused_ordering(231) 00:11:07.486 fused_ordering(232) 00:11:07.486 fused_ordering(233) 00:11:07.486 fused_ordering(234) 00:11:07.486 fused_ordering(235) 00:11:07.486 fused_ordering(236) 00:11:07.486 fused_ordering(237) 00:11:07.486 fused_ordering(238) 00:11:07.486 fused_ordering(239) 00:11:07.486 fused_ordering(240) 00:11:07.486 fused_ordering(241) 00:11:07.486 fused_ordering(242) 00:11:07.486 fused_ordering(243) 00:11:07.486 fused_ordering(244) 00:11:07.486 fused_ordering(245) 00:11:07.486 fused_ordering(246) 00:11:07.486 fused_ordering(247) 00:11:07.486 fused_ordering(248) 00:11:07.486 fused_ordering(249) 00:11:07.486 fused_ordering(250) 00:11:07.486 fused_ordering(251) 00:11:07.486 fused_ordering(252) 00:11:07.486 fused_ordering(253) 00:11:07.486 fused_ordering(254) 00:11:07.486 fused_ordering(255) 00:11:07.486 fused_ordering(256) 00:11:07.486 fused_ordering(257) 00:11:07.486 fused_ordering(258) 00:11:07.486 fused_ordering(259) 00:11:07.486 fused_ordering(260) 00:11:07.486 fused_ordering(261) 00:11:07.486 fused_ordering(262) 00:11:07.486 fused_ordering(263) 00:11:07.486 fused_ordering(264) 00:11:07.486 fused_ordering(265) 00:11:07.486 fused_ordering(266) 00:11:07.486 fused_ordering(267) 00:11:07.486 fused_ordering(268) 00:11:07.486 fused_ordering(269) 00:11:07.486 fused_ordering(270) 00:11:07.486 fused_ordering(271) 00:11:07.486 fused_ordering(272) 00:11:07.486 fused_ordering(273) 00:11:07.486 fused_ordering(274) 00:11:07.486 fused_ordering(275) 00:11:07.486 fused_ordering(276) 00:11:07.486 fused_ordering(277) 00:11:07.486 fused_ordering(278) 00:11:07.486 fused_ordering(279) 00:11:07.486 fused_ordering(280) 00:11:07.486 fused_ordering(281) 00:11:07.486 fused_ordering(282) 00:11:07.486 fused_ordering(283) 00:11:07.486 fused_ordering(284) 00:11:07.486 fused_ordering(285) 00:11:07.486 fused_ordering(286) 00:11:07.486 fused_ordering(287) 00:11:07.486 fused_ordering(288) 00:11:07.486 fused_ordering(289) 00:11:07.486 fused_ordering(290) 00:11:07.486 fused_ordering(291) 00:11:07.486 fused_ordering(292) 00:11:07.486 fused_ordering(293) 00:11:07.486 fused_ordering(294) 
00:11:07.486 fused_ordering(295) [... fused_ordering(296) through fused_ordering(938) condensed: identical per-entry output, one entry per ordering index, timestamps advancing from 00:11:07.486 through 00:11:09.556 ...] 00:11:09.556 fused_ordering(939)
00:11:09.556 fused_ordering(940) 00:11:09.556 fused_ordering(941) 00:11:09.556 fused_ordering(942) 00:11:09.556 fused_ordering(943) 00:11:09.556 fused_ordering(944) 00:11:09.556 fused_ordering(945) 00:11:09.556 fused_ordering(946) 00:11:09.556 fused_ordering(947) 00:11:09.556 fused_ordering(948) 00:11:09.556 fused_ordering(949) 00:11:09.556 fused_ordering(950) 00:11:09.556 fused_ordering(951) 00:11:09.556 fused_ordering(952) 00:11:09.556 fused_ordering(953) 00:11:09.556 fused_ordering(954) 00:11:09.556 fused_ordering(955) 00:11:09.556 fused_ordering(956) 00:11:09.556 fused_ordering(957) 00:11:09.556 fused_ordering(958) 00:11:09.556 fused_ordering(959) 00:11:09.556 fused_ordering(960) 00:11:09.556 fused_ordering(961) 00:11:09.556 fused_ordering(962) 00:11:09.556 fused_ordering(963) 00:11:09.556 fused_ordering(964) 00:11:09.556 fused_ordering(965) 00:11:09.556 fused_ordering(966) 00:11:09.556 fused_ordering(967) 00:11:09.556 fused_ordering(968) 00:11:09.556 fused_ordering(969) 00:11:09.556 fused_ordering(970) 00:11:09.556 fused_ordering(971) 00:11:09.556 fused_ordering(972) 00:11:09.556 fused_ordering(973) 00:11:09.556 fused_ordering(974) 00:11:09.556 fused_ordering(975) 00:11:09.556 fused_ordering(976) 00:11:09.556 fused_ordering(977) 00:11:09.556 fused_ordering(978) 00:11:09.556 fused_ordering(979) 00:11:09.556 fused_ordering(980) 00:11:09.556 fused_ordering(981) 00:11:09.556 fused_ordering(982) 00:11:09.556 fused_ordering(983) 00:11:09.556 fused_ordering(984) 00:11:09.556 fused_ordering(985) 00:11:09.556 fused_ordering(986) 00:11:09.556 fused_ordering(987) 00:11:09.556 fused_ordering(988) 00:11:09.556 fused_ordering(989) 00:11:09.556 fused_ordering(990) 00:11:09.556 fused_ordering(991) 00:11:09.556 fused_ordering(992) 00:11:09.556 fused_ordering(993) 00:11:09.556 fused_ordering(994) 00:11:09.556 fused_ordering(995) 00:11:09.556 fused_ordering(996) 00:11:09.556 fused_ordering(997) 00:11:09.556 fused_ordering(998) 00:11:09.556 fused_ordering(999) 00:11:09.556 fused_ordering(1000) 00:11:09.556 fused_ordering(1001) 00:11:09.556 fused_ordering(1002) 00:11:09.556 fused_ordering(1003) 00:11:09.556 fused_ordering(1004) 00:11:09.556 fused_ordering(1005) 00:11:09.556 fused_ordering(1006) 00:11:09.556 fused_ordering(1007) 00:11:09.556 fused_ordering(1008) 00:11:09.556 fused_ordering(1009) 00:11:09.556 fused_ordering(1010) 00:11:09.556 fused_ordering(1011) 00:11:09.556 fused_ordering(1012) 00:11:09.556 fused_ordering(1013) 00:11:09.556 fused_ordering(1014) 00:11:09.556 fused_ordering(1015) 00:11:09.556 fused_ordering(1016) 00:11:09.556 fused_ordering(1017) 00:11:09.556 fused_ordering(1018) 00:11:09.556 fused_ordering(1019) 00:11:09.556 fused_ordering(1020) 00:11:09.556 fused_ordering(1021) 00:11:09.556 fused_ordering(1022) 00:11:09.556 fused_ordering(1023) 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:11:09.556 rmmod nvme_tcp 00:11:09.556 rmmod nvme_fabrics 00:11:09.556 rmmod nvme_keyring 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 4130790 ']' 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 4130790 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 4130790 ']' 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 4130790 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4130790 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4130790' 00:11:09.556 killing process with pid 4130790 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 4130790 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 4130790 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.556 20:09:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.086 20:09:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:12.086 00:11:12.086 real 0m10.695s 00:11:12.086 user 0m5.893s 00:11:12.086 sys 0m5.613s 00:11:12.086 20:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:12.086 20:09:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:12.086 ************************************ 00:11:12.086 END TEST nvmf_fused_ordering 00:11:12.086 ************************************ 00:11:12.086 20:09:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:12.086 20:09:36 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:12.086 20:09:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:12.086 20:09:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
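The teardown traced above (nvmftestfini) unwinds what the harness set up for the fused_ordering test: the initiator-side kernel NVMe/TCP modules are unloaded, the nvmf_tgt process started for the test is killed, the target network namespace is removed, and the leftover initiator address is flushed. A minimal sketch of that sequence, using the PID and interface names that appear in this run (the explicit namespace-delete command is an assumption; the harness performs it inside its _remove_spdk_ns helper):

  # unload the initiator-side kernel modules loaded for the TCP transport test
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # stop the nvmf_tgt process started for this test (PID 4130790 in this run)
  kill 4130790
  # assumption: _remove_spdk_ns amounts to deleting the target-side namespace
  ip netns delete cvl_0_0_ns_spdk
  # flush the address that was assigned to the initiator-side interface
  ip -4 addr flush cvl_0_1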
00:11:12.086 20:09:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:12.086 ************************************ 00:11:12.086 START TEST nvmf_delete_subsystem 00:11:12.086 ************************************ 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:12.086 * Looking for test storage... 00:11:12.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:12.086 20:09:37 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:12.086 20:09:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.363 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.363 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:17.363 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:17.363 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:17.363 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:17.363 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:17.363 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:17.363 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:17.363 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:17.363 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:17.363 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:17.363 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:17.363 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:17.363 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:17.363 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:17.363 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.363 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.363 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:17.364 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:17.364 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:17.364 
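The device-discovery pass running in this stretch of the trace maps each detected E810 function (driver ice, device 0x159b) to its kernel network interface by globbing the device's net/ directory in sysfs, which is what the pci_net_devs assignment in nvmf/common.sh does. A minimal standalone sketch of that lookup, using the two PCI addresses this run detects (the interface names it reports just below are cvl_0_0 and cvl_0_1):

  for pci in 0000:af:00.0 0000:af:00.1; do
      # each entry under the PCI device's net/ directory is a kernel netdev bound to that function
      for netdir in /sys/bus/pci/devices/$pci/net/*; do
          echo "Found net device under $pci: $(basename "$netdir")"
      done
  done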
20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:17.364 Found net devices under 0000:af:00.0: cvl_0_0 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:17.364 Found net devices under 0000:af:00.1: cvl_0_1 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:17.364 20:09:42 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:17.364 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:17.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:17.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:11:17.624 00:11:17.624 --- 10.0.0.2 ping statistics --- 00:11:17.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.624 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:17.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:17.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:11:17.624 00:11:17.624 --- 10.0.0.1 ping statistics --- 00:11:17.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.624 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=4134949 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 4134949 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 4134949 ']' 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.624 20:09:42 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:17.624 20:09:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.624 [2024-07-15 20:09:42.882763] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:11:17.624 [2024-07-15 20:09:42.882818] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.624 EAL: No free 2048 kB hugepages reported on node 1 00:11:17.624 [2024-07-15 20:09:42.968916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:17.883 [2024-07-15 20:09:43.056088] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.883 [2024-07-15 20:09:43.056133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.883 [2024-07-15 20:09:43.056145] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.883 [2024-07-15 20:09:43.056155] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.883 [2024-07-15 20:09:43.056163] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
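At this point nvmftestinit has wired the two E810 ports back-to-back through a network namespace and nvmfappstart has launched nvmf_tgt inside it; the delete_subsystem test then builds a delay-backed namespace, drives I/O at it with spdk_nvme_perf, and deletes the subsystem while that I/O is still in flight. A condensed sketch of the topology set up above and the command sequence the test issues next, assembled from the commands visible in this trace (rpc_cmd is the harness wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock; binary paths are abbreviated to the build tree):

  # target port (cvl_0_0) moves into a private namespace; initiator port (cvl_0_1) stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2            # root ns -> target ns connectivity check
  modprobe nvme-tcp             # initiator-side kernel transport

  # target runs inside the namespace on cores 0-1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

  # build the subsystem under test: TCP transport, one delay-wrapped null bdev as its namespace
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512
  rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # start I/O against the subsystem, then delete it while the workload is still running
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

Deleting the subsystem while perf still has queue depth outstanding is the point of the test: the flood of "completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" messages further down is the expected result of the queue pairs being torn down underneath the initiator.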
00:11:17.883 [2024-07-15 20:09:43.056211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.883 [2024-07-15 20:09:43.056215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.883 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:17.883 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:17.883 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:17.883 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:17.883 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.883 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.883 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:17.883 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.883 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.883 [2024-07-15 20:09:43.205200] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.883 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.883 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:17.883 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.883 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.883 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.883 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.883 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.883 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.884 [2024-07-15 20:09:43.221367] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.884 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.884 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:17.884 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.884 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.884 NULL1 00:11:17.884 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.884 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:17.884 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.884 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.141 Delay0 00:11:18.141 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.141 20:09:43 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.141 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.141 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.141 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.141 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4135086 00:11:18.141 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:18.141 20:09:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:18.141 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.141 [2024-07-15 20:09:43.296050] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:20.043 20:09:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.043 20:09:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.043 20:09:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:20.304 Read completed with error (sct=0, sc=8) 00:11:20.304 starting I/O failed: -6 00:11:20.304 Read completed with error (sct=0, sc=8) 00:11:20.304 Write completed with error (sct=0, sc=8) 00:11:20.304 Read completed with error (sct=0, sc=8) 00:11:20.304 Read completed with error (sct=0, sc=8) 00:11:20.304 starting I/O failed: -6 00:11:20.304 Write completed with error (sct=0, sc=8) 00:11:20.304 Write completed with error (sct=0, sc=8) 00:11:20.304 Read completed with error (sct=0, sc=8) 00:11:20.304 Read completed with error (sct=0, sc=8) 00:11:20.304 starting I/O failed: -6 00:11:20.304 Read completed with error (sct=0, sc=8) 00:11:20.304 Write completed with error (sct=0, sc=8) 00:11:20.304 Write completed with error (sct=0, sc=8) 00:11:20.304 Write completed with error (sct=0, sc=8) 00:11:20.304 starting I/O failed: -6 00:11:20.304 Read completed with error (sct=0, sc=8) 00:11:20.304 Read completed with error (sct=0, sc=8) 00:11:20.304 Write completed with error (sct=0, sc=8) 00:11:20.304 Read completed with error (sct=0, sc=8) 00:11:20.304 starting I/O failed: -6 00:11:20.304 Read completed with error (sct=0, sc=8) 00:11:20.304 Read completed with error (sct=0, sc=8) 00:11:20.304 Write completed with error (sct=0, sc=8) 00:11:20.304 Write completed with error (sct=0, sc=8) 00:11:20.304 starting I/O failed: -6 00:11:20.304 Read completed with error (sct=0, sc=8) 00:11:20.304 Write completed with error (sct=0, sc=8) 00:11:20.304 Read completed with error (sct=0, sc=8) 00:11:20.304 Write completed with error (sct=0, sc=8) 00:11:20.304 starting I/O failed: -6 00:11:20.304 Read completed with error (sct=0, sc=8) 00:11:20.304 Read completed with error (sct=0, sc=8) 00:11:20.304 Write completed with error (sct=0, sc=8) 00:11:20.304 Read completed with error (sct=0, sc=8) 00:11:20.304 starting I/O failed: -6 00:11:20.304 Read completed with error (sct=0, sc=8) 00:11:20.304 
Read completed with error (sct=0, sc=8) 00:11:20.304 starting I/O failed: -6 00:11:20.304 [... repeated "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries condensed ...] 00:11:20.304 [2024-07-15 20:09:45.549644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c4000 is same with the state(5) to be set 00:11:20.304 [... further failed-completion entries condensed ...] 00:11:20.304 [2024-07-15 20:09:45.551625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4ed4000c00 is same with the state(5) to be set 00:11:20.304 [... further failed-completion entries condensed ...] 00:11:20.305 [2024-07-15 20:09:45.552358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4ed400d020 is same with the state(5) to be set 00:11:21.242 [2024-07-15 20:09:46.515739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a2500 is same with the state(5) to be set 00:11:21.242 [... further failed-completion entries condensed ...] 00:11:21.242 [2024-07-15 20:09:46.551852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4ed400d370 is same with the state(5) to be set 00:11:21.243 [... further failed-completion entries condensed ...] 00:11:21.243 [2024-07-15 20:09:46.554132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c2e70 is same with the state(5) to be set 00:11:21.243 [... further failed-completion entries condensed ...] 00:11:21.243 [2024-07-15 20:09:46.555071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c6650 is same with the state(5) to be set 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243
Read completed with error (sct=0, sc=8) 00:11:21.243 Write completed with error (sct=0, sc=8) 00:11:21.243 Write completed with error (sct=0, sc=8) 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243 Write completed with error (sct=0, sc=8) 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243 Write completed with error (sct=0, sc=8) 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243 Write completed with error (sct=0, sc=8) 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243 Write completed with error (sct=0, sc=8) 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243 Read completed with error (sct=0, sc=8) 00:11:21.243 [2024-07-15 20:09:46.555201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c3cb0 is same with the state(5) to be set 00:11:21.243 Initializing NVMe Controllers 00:11:21.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:21.243 Controller IO queue size 128, less than required. 00:11:21.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:21.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:21.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:21.243 Initialization complete. Launching workers. 
00:11:21.243 ======================================================== 00:11:21.243 Latency(us) 00:11:21.243 Device Information : IOPS MiB/s Average min max 00:11:21.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 184.45 0.09 955406.70 503.35 1016896.30 00:11:21.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 149.74 0.07 903038.77 375.87 1044889.56 00:11:21.243 ======================================================== 00:11:21.243 Total : 334.19 0.16 931942.14 375.87 1044889.56 00:11:21.243 00:11:21.243 [2024-07-15 20:09:46.555710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a2500 (9): Bad file descriptor 00:11:21.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:21.243 20:09:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.243 20:09:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:21.243 20:09:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4135086 00:11:21.243 20:09:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4135086 00:11:21.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4135086) - No such process 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4135086 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 4135086 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 4135086 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
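The xtrace output above is delete_subsystem.sh polling for the perf process to go away: kill -0 sends no signal and only tests whether the PID still exists, and the loop gives up after roughly 30 half-second iterations. A minimal sketch of that polling pattern, assuming it is wrapped in a helper function (the real script inlines the same steps):

    wait_for_exit() {
        local pid=$1 delay=0
        # kill -0 delivers no signal; it only checks that the process is alive.
        while kill -0 "$pid" 2>/dev/null; do
            (( delay++ > 30 )) && return 1   # give up after ~15 seconds
            sleep 0.5
        done
        return 0
    }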
00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:21.827 [2024-07-15 20:09:47.079968] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.827 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.828 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.828 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:21.828 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.828 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4135700 00:11:21.828 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:21.828 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:21.828 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4135700 00:11:21.828 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:21.828 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.828 [2024-07-15 20:09:47.139507] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
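The subsystem is then re-created and a three-second random read/write load is started against it, so that the upcoming delete happens while I/O is in flight. A condensed sketch of that sequence with the long workspace paths shortened; every command and flag is taken from the trace above:

    rpc=scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # background I/O that the subsystem delete will interrupt
    build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!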
00:11:22.484 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:22.484 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4135700 00:11:22.484 20:09:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:23.053 20:09:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:23.053 20:09:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4135700 00:11:23.053 20:09:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:23.312 20:09:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:23.312 20:09:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4135700 00:11:23.312 20:09:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:23.880 20:09:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:23.880 20:09:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4135700 00:11:23.880 20:09:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:24.448 20:09:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:24.448 20:09:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4135700 00:11:24.448 20:09:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.016 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:25.016 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4135700 00:11:25.016 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.274 Initializing NVMe Controllers 00:11:25.274 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:25.274 Controller IO queue size 128, less than required. 00:11:25.274 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:25.274 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:25.274 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:25.274 Initialization complete. Launching workers. 
00:11:25.274 ======================================================== 00:11:25.274 Latency(us) 00:11:25.274 Device Information : IOPS MiB/s Average min max 00:11:25.274 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004775.65 1000155.29 1041759.29 00:11:25.274 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1007129.77 1000287.89 1042151.93 00:11:25.274 ======================================================== 00:11:25.274 Total : 256.00 0.12 1005952.71 1000155.29 1042151.93 00:11:25.274 00:11:25.274 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:25.274 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4135700 00:11:25.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4135700) - No such process 00:11:25.275 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4135700 00:11:25.275 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:25.534 rmmod nvme_tcp 00:11:25.534 rmmod nvme_fabrics 00:11:25.534 rmmod nvme_keyring 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 4134949 ']' 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 4134949 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 4134949 ']' 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 4134949 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4134949 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4134949' 00:11:25.534 killing process with pid 4134949 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 4134949 00:11:25.534 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
4134949 00:11:25.793 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:25.793 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:25.793 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:25.793 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:25.793 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:25.793 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.793 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:25.793 20:09:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.699 20:09:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:27.699 00:11:27.699 real 0m15.986s 00:11:27.699 user 0m29.712s 00:11:27.699 sys 0m5.137s 00:11:27.699 20:09:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:27.699 20:09:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:27.699 ************************************ 00:11:27.699 END TEST nvmf_delete_subsystem 00:11:27.699 ************************************ 00:11:27.699 20:09:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:27.699 20:09:53 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:27.699 20:09:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:27.699 20:09:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.699 20:09:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:27.959 ************************************ 00:11:27.959 START TEST nvmf_ns_masking 00:11:27.959 ************************************ 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:27.959 * Looking for test storage... 
00:11:27.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=096bb008-d0e5-4f97-b7e1-1e428473b882 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a2fd9bb5-fdaf-4ea7-a560-d85dd27c18e7 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=fb858e0d-918b-4c40-888e-e1747c5e97a6 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:27.959 20:09:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:34.525 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:34.525 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.525 
20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:34.525 Found net devices under 0000:af:00.0: cvl_0_0 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:34.525 Found net devices under 0000:af:00.1: cvl_0_1 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:34.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:11:34.525 00:11:34.525 --- 10.0.0.2 ping statistics --- 00:11:34.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.525 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:34.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:11:34.525 00:11:34.525 --- 10.0.0.1 ping statistics --- 00:11:34.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.525 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:34.525 20:09:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:34.526 20:09:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:34.526 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=4140021 00:11:34.526 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 4140021 00:11:34.526 20:09:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:34.526 20:09:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 4140021 ']' 00:11:34.526 20:09:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.526 20:09:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:34.526 20:09:58 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.526 20:09:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:34.526 20:09:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:34.526 [2024-07-15 20:09:59.026738] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:11:34.526 [2024-07-15 20:09:59.026799] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.526 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.526 [2024-07-15 20:09:59.116197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.526 [2024-07-15 20:09:59.210839] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.526 [2024-07-15 20:09:59.210878] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.526 [2024-07-15 20:09:59.210888] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.526 [2024-07-15 20:09:59.210897] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.526 [2024-07-15 20:09:59.210904] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:34.526 [2024-07-15 20:09:59.210926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.784 20:09:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:34.784 20:09:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:34.784 20:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:34.784 20:09:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:34.784 20:09:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:34.784 20:09:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.784 20:09:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:35.042 [2024-07-15 20:10:00.225597] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.042 20:10:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:35.042 20:10:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:35.043 20:10:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:35.301 Malloc1 00:11:35.301 20:10:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:35.560 Malloc2 00:11:35.560 20:10:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
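At this point the ns_masking test has a TCP transport, two malloc bdevs (64 MiB, 512-byte blocks) and an empty subsystem. A condensed recap of the bring-up commands from the trace, with workspace paths shortened; cvl_0_0_ns_spdk is the network namespace created by nvmf_tcp_init earlier in the log:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME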
00:11:35.818 20:10:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:36.076 20:10:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.334 [2024-07-15 20:10:01.491958] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.334 20:10:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:36.334 20:10:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fb858e0d-918b-4c40-888e-e1747c5e97a6 -a 10.0.0.2 -s 4420 -i 4 00:11:36.334 20:10:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:36.334 20:10:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:36.334 20:10:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:36.334 20:10:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:36.334 20:10:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:38.868 20:10:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:38.868 20:10:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:38.868 20:10:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:38.868 20:10:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:38.868 20:10:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:38.868 20:10:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:38.868 20:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:38.868 20:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:38.868 20:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:38.868 20:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:38.868 20:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:38.868 20:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:38.868 20:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:38.868 [ 0]:0x1 00:11:38.868 20:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:38.868 20:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:38.868 20:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6ed66de606964002a25885c4a65e8fa9 00:11:38.868 20:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6ed66de606964002a25885c4a65e8fa9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.868 20:10:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
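The "[ 0]:0x1" output above is the visibility probe: the namespace shows up in nvme list-ns and reports its real NGUID instead of all zeroes. A sketch of that check as a standalone helper; the trace runs the same two commands inline through ns_is_visible, so wrapping them in a function here is purely illustrative:

    ns_is_visible() {
        local nsid=$1                       # e.g. 0x1
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        # a namespace that is attached but masked reports an all-zero NGUID
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != 00000000000000000000000000000000 ]]
    }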
00:11:38.868 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:38.868 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:38.868 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:38.868 [ 0]:0x1 00:11:38.868 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:38.868 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:38.868 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6ed66de606964002a25885c4a65e8fa9 00:11:38.868 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6ed66de606964002a25885c4a65e8fa9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.868 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:38.869 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:38.869 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:38.869 [ 1]:0x2 00:11:38.869 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:38.869 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:38.869 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49a237b012364e34b6f1a955411c4150 00:11:38.869 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49a237b012364e34b6f1a955411c4150 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.869 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:38.869 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.127 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.386 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:39.645 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:39.645 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fb858e0d-918b-4c40-888e-e1747c5e97a6 -a 10.0.0.2 -s 4420 -i 4 00:11:39.645 20:10:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:39.645 20:10:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:39.645 20:10:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:39.645 20:10:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:39.645 20:10:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:39.645 20:10:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:42.181 20:10:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:42.181 20:10:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:42.181 20:10:06 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:42.181 20:10:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:42.181 20:10:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.181 20:10:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:42.181 20:10:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:42.181 20:10:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:42.181 [ 0]:0x2 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49a237b012364e34b6f1a955411c4150 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
49a237b012364e34b6f1a955411c4150 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:42.181 [ 0]:0x1 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:42.181 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.440 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6ed66de606964002a25885c4a65e8fa9 00:11:42.440 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6ed66de606964002a25885c4a65e8fa9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.440 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:42.440 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:42.440 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:42.440 [ 1]:0x2 00:11:42.440 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:42.440 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.440 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49a237b012364e34b6f1a955411c4150 00:11:42.440 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49a237b012364e34b6f1a955411c4150 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.440 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:42.699 [ 0]:0x2 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49a237b012364e34b6f1a955411c4150 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49a237b012364e34b6f1a955411c4150 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:42.699 20:10:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.699 20:10:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:42.959 20:10:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:42.959 20:10:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fb858e0d-918b-4c40-888e-e1747c5e97a6 -a 10.0.0.2 -s 4420 -i 4 00:11:43.219 20:10:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:43.219 20:10:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:43.219 20:10:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:43.219 20:10:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:43.219 20:10:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:43.219 20:10:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:45.125 20:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:45.125 20:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:45.125 20:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:45.125 20:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:45.125 20:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:45.125 20:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
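Condensed for reference, the namespace-masking sequence the trace above exercises (same NQNs, bdev names and 10.0.0.2 listener as this run; rpc.py stands in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path):

  # Re-add the namespace without auto-visibility, then toggle per-host access
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420
  # While masked, ns 1 is absent from list-ns and id-ns reports an all-zero NGUID
  nvme list-ns /dev/nvme0 | grep 0x1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid
  # Granting this host access makes the namespace visible; remove_host hides it again
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
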
00:11:45.125 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:45.125 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:45.125 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:45.125 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:45.125 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:45.125 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:45.125 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:45.385 [ 0]:0x1 00:11:45.385 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:45.385 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:45.385 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6ed66de606964002a25885c4a65e8fa9 00:11:45.385 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6ed66de606964002a25885c4a65e8fa9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:45.385 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:45.385 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:45.385 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:45.385 [ 1]:0x2 00:11:45.385 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:45.385 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:45.385 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49a237b012364e34b6f1a955411c4150 00:11:45.385 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49a237b012364e34b6f1a955411c4150 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:45.385 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:45.644 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:45.644 20:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:45.644 20:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:45.644 20:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:45.644 20:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:45.644 20:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:45.644 20:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:45.645 20:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:45.645 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:45.645 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:45.645 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:45.645 20:10:10 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:11:45.645 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:45.645 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:45.645 20:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:45.645 20:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:45.645 20:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:45.645 20:10:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:45.645 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:45.645 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:45.645 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:45.645 [ 0]:0x2 00:11:45.645 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:45.645 20:10:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:45.904 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49a237b012364e34b6f1a955411c4150 00:11:45.904 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49a237b012364e34b6f1a955411c4150 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:45.904 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:45.904 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:45.904 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:45.904 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:45.904 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:45.904 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:45.904 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:45.904 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:45.904 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:45.904 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:45.904 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:45.904 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:45.904 [2024-07-15 20:10:11.237846] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:45.904 request: 00:11:45.904 { 00:11:45.904 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:45.904 "nsid": 2, 00:11:45.904 "host": "nqn.2016-06.io.spdk:host1", 00:11:45.904 "method": "nvmf_ns_remove_host", 00:11:45.904 "req_id": 1 00:11:45.904 } 00:11:45.904 Got JSON-RPC error response 00:11:45.904 response: 00:11:45.904 { 00:11:45.904 "code": -32602, 00:11:45.904 "message": "Invalid parameters" 00:11:45.904 } 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:46.162 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:46.163 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:46.163 [ 0]:0x2 00:11:46.163 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:46.163 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:46.163 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49a237b012364e34b6f1a955411c4150 00:11:46.163 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
49a237b012364e34b6f1a955411c4150 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.163 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:46.163 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.163 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=4142404 00:11:46.163 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.163 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 4142404 /var/tmp/host.sock 00:11:46.163 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 4142404 ']' 00:11:46.163 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:46.163 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:46.163 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:46.163 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:46.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:46.163 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:46.163 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:46.163 [2024-07-15 20:10:11.473972] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
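The remainder of the test (traced below) brings up a second spdk_tgt on /var/tmp/host.sock and uses it as the NVMe-oF host: the namespaces are recreated with fixed NGUIDs, one controller is attached per host NQN, and the resulting bdev UUIDs are compared against the expected values. A condensed view, with rpc.py again standing in for the full scripts/rpc.py path:

  # Host-side SPDK instance on its own RPC socket (core mask 0x2, as above)
  spdk_tgt -r /var/tmp/host.sock -m 2 &
  # Target side: namespaces carry fixed NGUIDs (flags as traced below)
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 096BB008D0E54F97B7E11E428473B882 -i
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # Host side: attach a controller per host NQN, then read back the bdev UUID
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
  rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'
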
00:11:46.163 [2024-07-15 20:10:11.474029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4142404 ] 00:11:46.163 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.421 [2024-07-15 20:10:11.545129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.421 [2024-07-15 20:10:11.635239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.679 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:46.679 20:10:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:46.679 20:10:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.938 20:10:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:47.197 20:10:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 096bb008-d0e5-4f97-b7e1-1e428473b882 00:11:47.197 20:10:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:47.197 20:10:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 096BB008D0E54F97B7E11E428473B882 -i 00:11:47.456 20:10:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a2fd9bb5-fdaf-4ea7-a560-d85dd27c18e7 00:11:47.456 20:10:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:47.456 20:10:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A2FD9BB5FDAF4EA7A560D85DD27C18E7 -i 00:11:47.715 20:10:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:47.974 20:10:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:47.974 20:10:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:47.974 20:10:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:48.542 nvme0n1 00:11:48.542 20:10:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:48.542 20:10:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:11:48.800 nvme1n2 00:11:48.800 20:10:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:48.800 20:10:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:48.800 20:10:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:48.800 20:10:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:48.800 20:10:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:49.059 20:10:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:49.059 20:10:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:49.059 20:10:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:49.059 20:10:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:49.318 20:10:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 096bb008-d0e5-4f97-b7e1-1e428473b882 == \0\9\6\b\b\0\0\8\-\d\0\e\5\-\4\f\9\7\-\b\7\e\1\-\1\e\4\2\8\4\7\3\b\8\8\2 ]] 00:11:49.318 20:10:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:49.318 20:10:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:49.318 20:10:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:49.577 20:10:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ a2fd9bb5-fdaf-4ea7-a560-d85dd27c18e7 == \a\2\f\d\9\b\b\5\-\f\d\a\f\-\4\e\a\7\-\a\5\6\0\-\d\8\5\d\d\2\7\c\1\8\e\7 ]] 00:11:49.577 20:10:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 4142404 00:11:49.577 20:10:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 4142404 ']' 00:11:49.577 20:10:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 4142404 00:11:49.577 20:10:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:49.577 20:10:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:49.577 20:10:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4142404 00:11:49.836 20:10:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:49.836 20:10:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:49.836 20:10:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4142404' 00:11:49.836 killing process with pid 4142404 00:11:49.836 20:10:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 4142404 00:11:49.836 20:10:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 4142404 00:11:50.094 20:10:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:50.354 20:10:15 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:50.354 rmmod nvme_tcp 00:11:50.354 rmmod nvme_fabrics 00:11:50.354 rmmod nvme_keyring 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 4140021 ']' 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 4140021 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 4140021 ']' 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 4140021 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4140021 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4140021' 00:11:50.354 killing process with pid 4140021 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 4140021 00:11:50.354 20:10:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 4140021 00:11:50.614 20:10:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:50.614 20:10:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:50.614 20:10:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:50.614 20:10:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:50.614 20:10:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:50.614 20:10:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.614 20:10:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.614 20:10:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.178 20:10:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:53.178 00:11:53.178 real 0m24.874s 00:11:53.178 user 0m28.574s 00:11:53.178 sys 0m6.635s 00:11:53.178 20:10:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:53.178 20:10:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:53.178 ************************************ 00:11:53.178 END TEST nvmf_ns_masking 00:11:53.178 ************************************ 00:11:53.178 20:10:17 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:11:53.178 20:10:17 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:53.178 20:10:17 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:53.178 20:10:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:53.178 20:10:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:53.178 20:10:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:53.178 ************************************ 00:11:53.178 START TEST nvmf_nvme_cli 00:11:53.178 ************************************ 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:53.178 * Looking for test storage... 00:11:53.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:53.178 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:53.179 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:53.179 20:10:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:53.179 20:10:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:53.179 20:10:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:53.179 20:10:18 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:11:53.179 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:53.179 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.179 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:53.179 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:53.179 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:53.179 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.179 20:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:53.179 20:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.179 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:53.179 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:53.179 20:10:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:53.179 20:10:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.448 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:58.449 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:58.449 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:58.449 Found net devices under 0000:af:00.0: cvl_0_0 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:58.449 Found net devices under 0000:af:00.1: cvl_0_1 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:58.449 20:10:23 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:58.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:58.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:11:58.449 00:11:58.449 --- 10.0.0.2 ping statistics --- 00:11:58.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.449 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:58.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:58.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:11:58.449 00:11:58.449 --- 10.0.0.1 ping statistics --- 00:11:58.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.449 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:58.449 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:58.706 20:10:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:58.706 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:58.706 20:10:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:58.706 20:10:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.706 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=4146786 00:11:58.706 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 4146786 00:11:58.706 20:10:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:58.706 20:10:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 4146786 ']' 00:11:58.706 20:10:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.706 20:10:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:58.706 20:10:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.706 20:10:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:58.706 20:10:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.706 [2024-07-15 20:10:23.878068] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
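The nvmf_tcp_init plumbing above reduces to the usual phy-mode split of the two E810 ports: cvl_0_1 stays in the root namespace as the initiator, cvl_0_0 is moved into cvl_0_0_ns_spdk and carries the 10.0.0.2 listener, and both directions are ping-checked before nvmf_tgt starts inside the namespace. Condensed from the trace (interface names and addresses are the ones detected on this node):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
  # the target then runs inside the namespace:
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF
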
00:11:58.706 [2024-07-15 20:10:23.878123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.706 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.706 [2024-07-15 20:10:23.964532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:58.965 [2024-07-15 20:10:24.059241] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.965 [2024-07-15 20:10:24.059292] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.965 [2024-07-15 20:10:24.059302] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.965 [2024-07-15 20:10:24.059311] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.965 [2024-07-15 20:10:24.059319] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:58.965 [2024-07-15 20:10:24.059366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.965 [2024-07-15 20:10:24.059484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.965 [2024-07-15 20:10:24.059574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.965 [2024-07-15 20:10:24.059576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.533 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:59.534 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:11:59.534 20:10:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:59.534 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:59.534 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:59.534 20:10:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.534 20:10:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:59.534 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.534 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:59.534 [2024-07-15 20:10:24.872879] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:59.534 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.534 20:10:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:59.534 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.534 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:59.793 Malloc0 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:59.793 Malloc1 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.793 20:10:24 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:59.793 [2024-07-15 20:10:24.955592] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.793 20:10:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:59.793 00:11:59.793 Discovery Log Number of Records 2, Generation counter 2 00:11:59.793 =====Discovery Log Entry 0====== 00:11:59.793 trtype: tcp 00:11:59.793 adrfam: ipv4 00:11:59.793 subtype: current discovery subsystem 00:11:59.793 treq: not required 00:11:59.793 portid: 0 00:11:59.793 trsvcid: 4420 00:11:59.793 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:59.793 traddr: 10.0.0.2 00:11:59.793 eflags: explicit discovery connections, duplicate discovery information 00:11:59.793 sectype: none 00:11:59.793 =====Discovery Log Entry 1====== 00:11:59.793 trtype: tcp 00:11:59.793 adrfam: ipv4 00:11:59.793 subtype: nvme subsystem 00:11:59.793 treq: not required 00:11:59.793 portid: 0 00:11:59.793 trsvcid: 4420 00:11:59.793 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:59.793 traddr: 10.0.0.2 00:11:59.793 eflags: none 00:11:59.793 sectype: none 00:11:59.793 20:10:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:59.793 20:10:25 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:59.793 20:10:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:59.793 20:10:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:59.793 20:10:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:59.793 20:10:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:59.793 20:10:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:59.793 20:10:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:59.793 20:10:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:59.793 20:10:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:59.793 20:10:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:01.171 20:10:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:01.171 20:10:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:01.171 20:10:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.171 20:10:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:01.171 20:10:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:01.171 20:10:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:03.125 20:10:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:03.125 20:10:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:03.125 20:10:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:03.125 20:10:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:03.125 20:10:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.125 20:10:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:03.125 20:10:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:03.125 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:03.125 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:03.125 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:03.385 20:10:28 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:03.385 /dev/nvme0n1 ]] 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:03.385 20:10:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:03.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.644 20:10:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:03.644 20:10:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:03.644 20:10:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:03.644 20:10:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.903 20:10:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:03.903 20:10:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.903 20:10:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:03.903 20:10:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:03.903 20:10:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.903 20:10:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.903 20:10:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.903 20:10:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.903 20:10:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:03.903 20:10:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:03.903 20:10:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:03.903 20:10:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:03.903 20:10:29 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:03.903 20:10:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:03.903 20:10:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:03.903 20:10:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:03.903 rmmod nvme_tcp 00:12:03.903 rmmod nvme_fabrics 00:12:03.904 rmmod nvme_keyring 00:12:03.904 20:10:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:03.904 20:10:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:03.904 20:10:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:03.904 20:10:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 4146786 ']' 00:12:03.904 20:10:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 4146786 00:12:03.904 20:10:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 4146786 ']' 00:12:03.904 20:10:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 4146786 00:12:03.904 20:10:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:03.904 20:10:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:03.904 20:10:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4146786 00:12:03.904 20:10:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:03.904 20:10:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:03.904 20:10:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4146786' 00:12:03.904 killing process with pid 4146786 00:12:03.904 20:10:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 4146786 00:12:03.904 20:10:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 4146786 00:12:04.163 20:10:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:04.163 20:10:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:04.163 20:10:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:04.163 20:10:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:04.163 20:10:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:04.163 20:10:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.163 20:10:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.163 20:10:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.696 20:10:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:06.696 00:12:06.696 real 0m13.404s 00:12:06.696 user 0m22.842s 00:12:06.696 sys 0m4.911s 00:12:06.696 20:10:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:06.696 20:10:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:06.696 ************************************ 00:12:06.696 END TEST nvmf_nvme_cli 00:12:06.696 ************************************ 00:12:06.696 20:10:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:06.696 20:10:31 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:06.696 20:10:31 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:06.696 20:10:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:06.696 20:10:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:06.696 20:10:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:06.696 ************************************ 00:12:06.696 START TEST nvmf_vfio_user 00:12:06.696 ************************************ 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:06.696 * Looking for test storage... 00:12:06.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:06.696 
20:10:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4148383 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4148383' 00:12:06.696 Process pid: 4148383 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4148383 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 4148383 ']' 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:06.696 20:10:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:06.696 [2024-07-15 20:10:31.648363] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:12:06.696 [2024-07-15 20:10:31.648404] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.696 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.696 [2024-07-15 20:10:31.718545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.696 [2024-07-15 20:10:31.808920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.696 [2024-07-15 20:10:31.808976] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.696 [2024-07-15 20:10:31.808987] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.696 [2024-07-15 20:10:31.808995] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.696 [2024-07-15 20:10:31.809002] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
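For quick reference, the nvmf_nvme_cli run that finished above (END TEST nvmf_nvme_cli) condenses to the target/host command sequence below. This is a sketch assembled from the trace, not the test script itself: rpc_cmd is the autotest wrapper around scripts/rpc.py against the running nvmf_tgt, the host NQN/ID and the 10.0.0.2:4420 listener are taken verbatim from the log, and the xtrace/error-handling plumbing is omitted.

# Target side: one subsystem with two Malloc namespaces, plus data and discovery listeners
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Host side: discover, connect, wait for both namespaces to surface, then tear down
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
              --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
             --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # waitforserial: polls until this reaches 2
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1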
00:12:06.696 [2024-07-15 20:10:31.809126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.696 [2024-07-15 20:10:31.809237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.696 [2024-07-15 20:10:31.809349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.696 [2024-07-15 20:10:31.809357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.263 20:10:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:07.263 20:10:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:07.263 20:10:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:08.199 20:10:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:08.457 20:10:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:08.457 20:10:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:08.457 20:10:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:08.457 20:10:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:08.457 20:10:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:08.716 Malloc1 00:12:08.975 20:10:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:09.233 20:10:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:09.492 20:10:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:09.750 20:10:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:09.750 20:10:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:09.750 20:10:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:10.009 Malloc2 00:12:10.009 20:10:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:10.267 20:10:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:10.267 20:10:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:10.525 20:10:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:10.525 20:10:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:10.525 20:10:35 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:10.525 20:10:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:10.525 20:10:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:10.525 20:10:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:10.525 [2024-07-15 20:10:35.870650] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:12:10.525 [2024-07-15 20:10:35.870687] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4149187 ] 00:12:10.785 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.785 [2024-07-15 20:10:35.908741] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:10.785 [2024-07-15 20:10:35.912511] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:10.785 [2024-07-15 20:10:35.912534] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3a6332f000 00:12:10.785 [2024-07-15 20:10:35.913511] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:10.785 [2024-07-15 20:10:35.914512] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:10.785 [2024-07-15 20:10:35.915511] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:10.785 [2024-07-15 20:10:35.916522] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:10.785 [2024-07-15 20:10:35.917528] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:10.785 [2024-07-15 20:10:35.918530] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:10.785 [2024-07-15 20:10:35.919527] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:10.785 [2024-07-15 20:10:35.920545] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:10.785 [2024-07-15 20:10:35.921553] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:10.785 [2024-07-15 20:10:35.921566] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3a63324000 00:12:10.785 [2024-07-15 20:10:35.922978] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:10.785 [2024-07-15 20:10:35.942392] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:10.785 [2024-07-15 20:10:35.942426] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:10.785 [2024-07-15 20:10:35.944697] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:10.785 [2024-07-15 20:10:35.944754] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:10.785 [2024-07-15 20:10:35.944852] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:10.785 [2024-07-15 20:10:35.944872] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:10.785 [2024-07-15 20:10:35.944880] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:10.785 [2024-07-15 20:10:35.945701] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:10.785 [2024-07-15 20:10:35.945714] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:10.785 [2024-07-15 20:10:35.945723] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:10.785 [2024-07-15 20:10:35.946708] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:10.785 [2024-07-15 20:10:35.946723] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:10.785 [2024-07-15 20:10:35.946733] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:10.785 [2024-07-15 20:10:35.947714] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:10.785 [2024-07-15 20:10:35.947725] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:10.785 [2024-07-15 20:10:35.948721] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:10.785 [2024-07-15 20:10:35.948732] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:10.785 [2024-07-15 20:10:35.948738] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:10.785 [2024-07-15 20:10:35.948747] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:10.785 [2024-07-15 20:10:35.948853] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:10.785 [2024-07-15 20:10:35.948860] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:10.785 [2024-07-15 20:10:35.948866] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:10.785 [2024-07-15 20:10:35.953264] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:10.785 [2024-07-15 20:10:35.953755] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:10.785 [2024-07-15 20:10:35.954771] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:10.785 [2024-07-15 20:10:35.955767] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:10.785 [2024-07-15 20:10:35.955872] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:10.785 [2024-07-15 20:10:35.956784] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:10.785 [2024-07-15 20:10:35.956795] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:10.785 [2024-07-15 20:10:35.956801] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:10.785 [2024-07-15 20:10:35.956826] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:10.785 [2024-07-15 20:10:35.956840] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:10.785 [2024-07-15 20:10:35.956857] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:10.785 [2024-07-15 20:10:35.956864] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:10.785 [2024-07-15 20:10:35.956880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:10.785 [2024-07-15 20:10:35.956941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:10.785 [2024-07-15 20:10:35.956951] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:10.785 [2024-07-15 20:10:35.956960] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:10.785 [2024-07-15 20:10:35.956966] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:10.785 [2024-07-15 20:10:35.956972] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:10.785 [2024-07-15 20:10:35.956978] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:10.785 [2024-07-15 20:10:35.956984] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:10.785 [2024-07-15 20:10:35.956990] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:10.785 [2024-07-15 20:10:35.956999] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:10.785 [2024-07-15 20:10:35.957011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:10.785 [2024-07-15 20:10:35.957031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:10.785 [2024-07-15 20:10:35.957048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.785 [2024-07-15 20:10:35.957059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.785 [2024-07-15 20:10:35.957070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.785 [2024-07-15 20:10:35.957080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.785 [2024-07-15 20:10:35.957087] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:10.785 [2024-07-15 20:10:35.957097] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:10.785 [2024-07-15 20:10:35.957109] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:10.785 [2024-07-15 20:10:35.957123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:10.785 [2024-07-15 20:10:35.957131] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:10.785 [2024-07-15 20:10:35.957137] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:10.785 [2024-07-15 20:10:35.957145] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:10.785 [2024-07-15 20:10:35.957158] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:10.785 [2024-07-15 20:10:35.957169] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:10.785 [2024-07-15 20:10:35.957186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:10.785 [2024-07-15 20:10:35.957269] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:10.785 [2024-07-15 20:10:35.957279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:10.785 [2024-07-15 20:10:35.957289] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:10.785 [2024-07-15 20:10:35.957295] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:10.785 [2024-07-15 20:10:35.957303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:10.785 [2024-07-15 20:10:35.957324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:10.785 [2024-07-15 20:10:35.957337] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:10.785 [2024-07-15 20:10:35.957351] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:10.785 [2024-07-15 20:10:35.957360] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:10.785 [2024-07-15 20:10:35.957370] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:10.785 [2024-07-15 20:10:35.957375] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:10.785 [2024-07-15 20:10:35.957383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:10.785 [2024-07-15 20:10:35.957405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:10.785 [2024-07-15 20:10:35.957419] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:10.785 [2024-07-15 20:10:35.957429] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:10.786 [2024-07-15 20:10:35.957439] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:10.786 [2024-07-15 20:10:35.957444] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:10.786 [2024-07-15 20:10:35.957451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:10.786 [2024-07-15 20:10:35.957470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:10.786 [2024-07-15 20:10:35.957481] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:10.786 [2024-07-15 20:10:35.957489] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:12:10.786 [2024-07-15 20:10:35.957498] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:10.786 [2024-07-15 20:10:35.957508] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:10.786 [2024-07-15 20:10:35.957514] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:10.786 [2024-07-15 20:10:35.957521] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:10.786 [2024-07-15 20:10:35.957527] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:10.786 [2024-07-15 20:10:35.957533] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:10.786 [2024-07-15 20:10:35.957539] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:10.786 [2024-07-15 20:10:35.957560] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:10.786 [2024-07-15 20:10:35.957573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:10.786 [2024-07-15 20:10:35.957587] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:10.786 [2024-07-15 20:10:35.957599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:10.786 [2024-07-15 20:10:35.957613] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:10.786 [2024-07-15 20:10:35.957628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:10.786 [2024-07-15 20:10:35.957642] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:10.786 [2024-07-15 20:10:35.957654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:10.786 [2024-07-15 20:10:35.957670] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:10.786 [2024-07-15 20:10:35.957677] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:10.786 [2024-07-15 20:10:35.957682] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:10.786 [2024-07-15 20:10:35.957686] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:10.786 [2024-07-15 20:10:35.957694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:10.786 [2024-07-15 20:10:35.957703] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:10.786 
[2024-07-15 20:10:35.957709] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:10.786 [2024-07-15 20:10:35.957717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:10.786 [2024-07-15 20:10:35.957726] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:10.786 [2024-07-15 20:10:35.957731] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:10.786 [2024-07-15 20:10:35.957739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:10.786 [2024-07-15 20:10:35.957749] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:10.786 [2024-07-15 20:10:35.957754] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:10.786 [2024-07-15 20:10:35.957764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:10.786 [2024-07-15 20:10:35.957773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:10.786 [2024-07-15 20:10:35.957788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:10.786 [2024-07-15 20:10:35.957802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:10.786 [2024-07-15 20:10:35.957811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:10.786 ===================================================== 00:12:10.786 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:10.786 ===================================================== 00:12:10.786 Controller Capabilities/Features 00:12:10.786 ================================ 00:12:10.786 Vendor ID: 4e58 00:12:10.786 Subsystem Vendor ID: 4e58 00:12:10.786 Serial Number: SPDK1 00:12:10.786 Model Number: SPDK bdev Controller 00:12:10.786 Firmware Version: 24.09 00:12:10.786 Recommended Arb Burst: 6 00:12:10.786 IEEE OUI Identifier: 8d 6b 50 00:12:10.786 Multi-path I/O 00:12:10.786 May have multiple subsystem ports: Yes 00:12:10.786 May have multiple controllers: Yes 00:12:10.786 Associated with SR-IOV VF: No 00:12:10.786 Max Data Transfer Size: 131072 00:12:10.786 Max Number of Namespaces: 32 00:12:10.786 Max Number of I/O Queues: 127 00:12:10.786 NVMe Specification Version (VS): 1.3 00:12:10.786 NVMe Specification Version (Identify): 1.3 00:12:10.786 Maximum Queue Entries: 256 00:12:10.786 Contiguous Queues Required: Yes 00:12:10.786 Arbitration Mechanisms Supported 00:12:10.786 Weighted Round Robin: Not Supported 00:12:10.786 Vendor Specific: Not Supported 00:12:10.786 Reset Timeout: 15000 ms 00:12:10.786 Doorbell Stride: 4 bytes 00:12:10.786 NVM Subsystem Reset: Not Supported 00:12:10.786 Command Sets Supported 00:12:10.786 NVM Command Set: Supported 00:12:10.786 Boot Partition: Not Supported 00:12:10.786 Memory Page Size Minimum: 4096 bytes 00:12:10.786 Memory Page Size Maximum: 4096 bytes 00:12:10.786 Persistent Memory Region: Not Supported 
00:12:10.786 Optional Asynchronous Events Supported 00:12:10.786 Namespace Attribute Notices: Supported 00:12:10.786 Firmware Activation Notices: Not Supported 00:12:10.786 ANA Change Notices: Not Supported 00:12:10.786 PLE Aggregate Log Change Notices: Not Supported 00:12:10.786 LBA Status Info Alert Notices: Not Supported 00:12:10.786 EGE Aggregate Log Change Notices: Not Supported 00:12:10.786 Normal NVM Subsystem Shutdown event: Not Supported 00:12:10.786 Zone Descriptor Change Notices: Not Supported 00:12:10.786 Discovery Log Change Notices: Not Supported 00:12:10.786 Controller Attributes 00:12:10.786 128-bit Host Identifier: Supported 00:12:10.786 Non-Operational Permissive Mode: Not Supported 00:12:10.786 NVM Sets: Not Supported 00:12:10.786 Read Recovery Levels: Not Supported 00:12:10.786 Endurance Groups: Not Supported 00:12:10.786 Predictable Latency Mode: Not Supported 00:12:10.786 Traffic Based Keep ALive: Not Supported 00:12:10.786 Namespace Granularity: Not Supported 00:12:10.786 SQ Associations: Not Supported 00:12:10.786 UUID List: Not Supported 00:12:10.786 Multi-Domain Subsystem: Not Supported 00:12:10.786 Fixed Capacity Management: Not Supported 00:12:10.786 Variable Capacity Management: Not Supported 00:12:10.786 Delete Endurance Group: Not Supported 00:12:10.786 Delete NVM Set: Not Supported 00:12:10.786 Extended LBA Formats Supported: Not Supported 00:12:10.786 Flexible Data Placement Supported: Not Supported 00:12:10.786 00:12:10.786 Controller Memory Buffer Support 00:12:10.786 ================================ 00:12:10.786 Supported: No 00:12:10.786 00:12:10.786 Persistent Memory Region Support 00:12:10.786 ================================ 00:12:10.786 Supported: No 00:12:10.786 00:12:10.786 Admin Command Set Attributes 00:12:10.786 ============================ 00:12:10.786 Security Send/Receive: Not Supported 00:12:10.786 Format NVM: Not Supported 00:12:10.786 Firmware Activate/Download: Not Supported 00:12:10.786 Namespace Management: Not Supported 00:12:10.786 Device Self-Test: Not Supported 00:12:10.786 Directives: Not Supported 00:12:10.786 NVMe-MI: Not Supported 00:12:10.786 Virtualization Management: Not Supported 00:12:10.786 Doorbell Buffer Config: Not Supported 00:12:10.786 Get LBA Status Capability: Not Supported 00:12:10.786 Command & Feature Lockdown Capability: Not Supported 00:12:10.786 Abort Command Limit: 4 00:12:10.786 Async Event Request Limit: 4 00:12:10.786 Number of Firmware Slots: N/A 00:12:10.786 Firmware Slot 1 Read-Only: N/A 00:12:10.786 Firmware Activation Without Reset: N/A 00:12:10.786 Multiple Update Detection Support: N/A 00:12:10.786 Firmware Update Granularity: No Information Provided 00:12:10.786 Per-Namespace SMART Log: No 00:12:10.786 Asymmetric Namespace Access Log Page: Not Supported 00:12:10.786 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:10.786 Command Effects Log Page: Supported 00:12:10.786 Get Log Page Extended Data: Supported 00:12:10.786 Telemetry Log Pages: Not Supported 00:12:10.786 Persistent Event Log Pages: Not Supported 00:12:10.786 Supported Log Pages Log Page: May Support 00:12:10.786 Commands Supported & Effects Log Page: Not Supported 00:12:10.786 Feature Identifiers & Effects Log Page:May Support 00:12:10.786 NVMe-MI Commands & Effects Log Page: May Support 00:12:10.786 Data Area 4 for Telemetry Log: Not Supported 00:12:10.786 Error Log Page Entries Supported: 128 00:12:10.786 Keep Alive: Supported 00:12:10.786 Keep Alive Granularity: 10000 ms 00:12:10.786 00:12:10.786 NVM Command Set Attributes 
00:12:10.786 ========================== 00:12:10.786 Submission Queue Entry Size 00:12:10.786 Max: 64 00:12:10.786 Min: 64 00:12:10.786 Completion Queue Entry Size 00:12:10.786 Max: 16 00:12:10.786 Min: 16 00:12:10.786 Number of Namespaces: 32 00:12:10.786 Compare Command: Supported 00:12:10.786 Write Uncorrectable Command: Not Supported 00:12:10.786 Dataset Management Command: Supported 00:12:10.786 Write Zeroes Command: Supported 00:12:10.786 Set Features Save Field: Not Supported 00:12:10.786 Reservations: Not Supported 00:12:10.786 Timestamp: Not Supported 00:12:10.786 Copy: Supported 00:12:10.786 Volatile Write Cache: Present 00:12:10.786 Atomic Write Unit (Normal): 1 00:12:10.786 Atomic Write Unit (PFail): 1 00:12:10.786 Atomic Compare & Write Unit: 1 00:12:10.786 Fused Compare & Write: Supported 00:12:10.786 Scatter-Gather List 00:12:10.786 SGL Command Set: Supported (Dword aligned) 00:12:10.786 SGL Keyed: Not Supported 00:12:10.786 SGL Bit Bucket Descriptor: Not Supported 00:12:10.786 SGL Metadata Pointer: Not Supported 00:12:10.786 Oversized SGL: Not Supported 00:12:10.786 SGL Metadata Address: Not Supported 00:12:10.786 SGL Offset: Not Supported 00:12:10.786 Transport SGL Data Block: Not Supported 00:12:10.786 Replay Protected Memory Block: Not Supported 00:12:10.786 00:12:10.786 Firmware Slot Information 00:12:10.786 ========================= 00:12:10.786 Active slot: 1 00:12:10.786 Slot 1 Firmware Revision: 24.09 00:12:10.786 00:12:10.786 00:12:10.786 Commands Supported and Effects 00:12:10.786 ============================== 00:12:10.786 Admin Commands 00:12:10.786 -------------- 00:12:10.786 Get Log Page (02h): Supported 00:12:10.786 Identify (06h): Supported 00:12:10.786 Abort (08h): Supported 00:12:10.786 Set Features (09h): Supported 00:12:10.786 Get Features (0Ah): Supported 00:12:10.786 Asynchronous Event Request (0Ch): Supported 00:12:10.786 Keep Alive (18h): Supported 00:12:10.786 I/O Commands 00:12:10.786 ------------ 00:12:10.786 Flush (00h): Supported LBA-Change 00:12:10.786 Write (01h): Supported LBA-Change 00:12:10.786 Read (02h): Supported 00:12:10.786 Compare (05h): Supported 00:12:10.786 Write Zeroes (08h): Supported LBA-Change 00:12:10.786 Dataset Management (09h): Supported LBA-Change 00:12:10.786 Copy (19h): Supported LBA-Change 00:12:10.786 00:12:10.786 Error Log 00:12:10.786 ========= 00:12:10.786 00:12:10.786 Arbitration 00:12:10.786 =========== 00:12:10.786 Arbitration Burst: 1 00:12:10.786 00:12:10.786 Power Management 00:12:10.786 ================ 00:12:10.786 Number of Power States: 1 00:12:10.786 Current Power State: Power State #0 00:12:10.786 Power State #0: 00:12:10.786 Max Power: 0.00 W 00:12:10.786 Non-Operational State: Operational 00:12:10.786 Entry Latency: Not Reported 00:12:10.786 Exit Latency: Not Reported 00:12:10.786 Relative Read Throughput: 0 00:12:10.787 Relative Read Latency: 0 00:12:10.787 Relative Write Throughput: 0 00:12:10.787 Relative Write Latency: 0 00:12:10.787 Idle Power: Not Reported 00:12:10.787 Active Power: Not Reported 00:12:10.787 Non-Operational Permissive Mode: Not Supported 00:12:10.787 00:12:10.787 Health Information 00:12:10.787 ================== 00:12:10.787 Critical Warnings: 00:12:10.787 Available Spare Space: OK 00:12:10.787 Temperature: OK 00:12:10.787 Device Reliability: OK 00:12:10.787 Read Only: No 00:12:10.787 Volatile Memory Backup: OK 00:12:10.787 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:10.787 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:10.787 Available Spare: 0% 00:12:10.787 
Available Sp[2024-07-15 20:10:35.957930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:10.787 [2024-07-15 20:10:35.957941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:10.787 [2024-07-15 20:10:35.957976] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:10.787 [2024-07-15 20:10:35.957988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.787 [2024-07-15 20:10:35.957996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.787 [2024-07-15 20:10:35.958005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.787 [2024-07-15 20:10:35.958012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.787 [2024-07-15 20:10:35.958797] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:10.787 [2024-07-15 20:10:35.958812] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:10.787 [2024-07-15 20:10:35.959796] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:10.787 [2024-07-15 20:10:35.959860] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:10.787 [2024-07-15 20:10:35.959869] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:10.787 [2024-07-15 20:10:35.960809] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:10.787 [2024-07-15 20:10:35.960823] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:10.787 [2024-07-15 20:10:35.960880] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:10.787 [2024-07-15 20:10:35.962839] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:10.787 are Threshold: 0% 00:12:10.787 Life Percentage Used: 0% 00:12:10.787 Data Units Read: 0 00:12:10.787 Data Units Written: 0 00:12:10.787 Host Read Commands: 0 00:12:10.787 Host Write Commands: 0 00:12:10.787 Controller Busy Time: 0 minutes 00:12:10.787 Power Cycles: 0 00:12:10.787 Power On Hours: 0 hours 00:12:10.787 Unsafe Shutdowns: 0 00:12:10.787 Unrecoverable Media Errors: 0 00:12:10.787 Lifetime Error Log Entries: 0 00:12:10.787 Warning Temperature Time: 0 minutes 00:12:10.787 Critical Temperature Time: 0 minutes 00:12:10.787 00:12:10.787 Number of Queues 00:12:10.787 ================ 00:12:10.787 Number of I/O Submission Queues: 127 00:12:10.787 Number of I/O Completion Queues: 127 00:12:10.787 00:12:10.787 Active Namespaces 00:12:10.787 ================= 00:12:10.787 Namespace ID:1 00:12:10.787 Error Recovery Timeout: Unlimited 00:12:10.787 Command 
Set Identifier: NVM (00h) 00:12:10.787 Deallocate: Supported 00:12:10.787 Deallocated/Unwritten Error: Not Supported 00:12:10.787 Deallocated Read Value: Unknown 00:12:10.787 Deallocate in Write Zeroes: Not Supported 00:12:10.787 Deallocated Guard Field: 0xFFFF 00:12:10.787 Flush: Supported 00:12:10.787 Reservation: Supported 00:12:10.787 Namespace Sharing Capabilities: Multiple Controllers 00:12:10.787 Size (in LBAs): 131072 (0GiB) 00:12:10.787 Capacity (in LBAs): 131072 (0GiB) 00:12:10.787 Utilization (in LBAs): 131072 (0GiB) 00:12:10.787 NGUID: 86AB6C074F7946D2908343C237C655FE 00:12:10.787 UUID: 86ab6c07-4f79-46d2-9083-43c237c655fe 00:12:10.787 Thin Provisioning: Not Supported 00:12:10.787 Per-NS Atomic Units: Yes 00:12:10.787 Atomic Boundary Size (Normal): 0 00:12:10.787 Atomic Boundary Size (PFail): 0 00:12:10.787 Atomic Boundary Offset: 0 00:12:10.787 Maximum Single Source Range Length: 65535 00:12:10.787 Maximum Copy Length: 65535 00:12:10.787 Maximum Source Range Count: 1 00:12:10.787 NGUID/EUI64 Never Reused: No 00:12:10.787 Namespace Write Protected: No 00:12:10.787 Number of LBA Formats: 1 00:12:10.787 Current LBA Format: LBA Format #00 00:12:10.787 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:10.787 00:12:10.787 20:10:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:10.787 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.044 [2024-07-15 20:10:36.211132] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:16.309 Initializing NVMe Controllers 00:12:16.309 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:16.309 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:16.309 Initialization complete. Launching workers. 00:12:16.309 ======================================================== 00:12:16.309 Latency(us) 00:12:16.309 Device Information : IOPS MiB/s Average min max 00:12:16.309 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 24207.86 94.56 5286.79 1423.49 8539.33 00:12:16.309 ======================================================== 00:12:16.309 Total : 24207.86 94.56 5286.79 1423.49 8539.33 00:12:16.309 00:12:16.309 [2024-07-15 20:10:41.234011] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:16.309 20:10:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:16.309 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.309 [2024-07-15 20:10:41.484347] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:21.583 Initializing NVMe Controllers 00:12:21.583 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:21.583 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:21.583 Initialization complete. Launching workers. 
00:12:21.583 ======================================================== 00:12:21.583 Latency(us) 00:12:21.583 Device Information : IOPS MiB/s Average min max 00:12:21.583 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16028.22 62.61 7990.91 6796.59 12152.83 00:12:21.583 ======================================================== 00:12:21.583 Total : 16028.22 62.61 7990.91 6796.59 12152.83 00:12:21.583 00:12:21.583 [2024-07-15 20:10:46.526629] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:21.583 20:10:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:21.583 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.583 [2024-07-15 20:10:46.784918] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:26.856 [2024-07-15 20:10:51.858560] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:26.856 Initializing NVMe Controllers 00:12:26.856 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:26.856 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:26.856 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:26.856 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:26.856 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:26.856 Initialization complete. Launching workers. 00:12:26.856 Starting thread on core 2 00:12:26.856 Starting thread on core 3 00:12:26.856 Starting thread on core 1 00:12:26.856 20:10:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:26.856 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.856 [2024-07-15 20:10:52.191683] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:31.042 [2024-07-15 20:10:56.123455] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:31.042 Initializing NVMe Controllers 00:12:31.042 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:31.042 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:31.042 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:31.042 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:31.042 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:31.042 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:31.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:31.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:31.042 Initialization complete. Launching workers. 
00:12:31.042 Starting thread on core 1 with urgent priority queue 00:12:31.042 Starting thread on core 2 with urgent priority queue 00:12:31.042 Starting thread on core 3 with urgent priority queue 00:12:31.042 Starting thread on core 0 with urgent priority queue 00:12:31.042 SPDK bdev Controller (SPDK1 ) core 0: 2722.33 IO/s 36.73 secs/100000 ios 00:12:31.042 SPDK bdev Controller (SPDK1 ) core 1: 3111.67 IO/s 32.14 secs/100000 ios 00:12:31.042 SPDK bdev Controller (SPDK1 ) core 2: 2709.67 IO/s 36.90 secs/100000 ios 00:12:31.042 SPDK bdev Controller (SPDK1 ) core 3: 2811.00 IO/s 35.57 secs/100000 ios 00:12:31.042 ======================================================== 00:12:31.042 00:12:31.042 20:10:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:31.042 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.301 [2024-07-15 20:10:56.445778] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:31.301 Initializing NVMe Controllers 00:12:31.301 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:31.301 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:31.301 Namespace ID: 1 size: 0GB 00:12:31.301 Initialization complete. 00:12:31.301 INFO: using host memory buffer for IO 00:12:31.301 Hello world! 00:12:31.301 [2024-07-15 20:10:56.481308] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:31.301 20:10:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:31.301 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.559 [2024-07-15 20:10:56.798817] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:32.495 Initializing NVMe Controllers 00:12:32.495 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:32.495 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:32.495 Initialization complete. Launching workers. 
00:12:32.495 submit (in ns) avg, min, max = 9496.6, 4543.6, 4037819.1 00:12:32.495 complete (in ns) avg, min, max = 19635.6, 2710.9, 6991127.3 00:12:32.495 00:12:32.495 Submit histogram 00:12:32.495 ================ 00:12:32.495 Range in us Cumulative Count 00:12:32.495 4.538 - 4.567: 0.3576% ( 61) 00:12:32.495 4.567 - 4.596: 1.4892% ( 193) 00:12:32.495 4.596 - 4.625: 3.4709% ( 338) 00:12:32.495 4.625 - 4.655: 7.2526% ( 645) 00:12:32.495 4.655 - 4.684: 17.5657% ( 1759) 00:12:32.495 4.684 - 4.713: 26.8703% ( 1587) 00:12:32.495 4.713 - 4.742: 38.6667% ( 2012) 00:12:32.495 4.742 - 4.771: 51.5303% ( 2194) 00:12:32.495 4.771 - 4.800: 61.9723% ( 1781) 00:12:32.495 4.800 - 4.829: 72.5668% ( 1807) 00:12:32.495 4.829 - 4.858: 79.1862% ( 1129) 00:12:32.495 4.858 - 4.887: 83.8649% ( 798) 00:12:32.495 4.887 - 4.916: 86.3450% ( 423) 00:12:32.495 4.916 - 4.945: 88.1391% ( 306) 00:12:32.495 4.945 - 4.975: 89.6400% ( 256) 00:12:32.495 4.975 - 5.004: 91.3286% ( 288) 00:12:32.495 5.004 - 5.033: 93.5448% ( 378) 00:12:32.495 5.033 - 5.062: 95.3565% ( 309) 00:12:32.495 5.062 - 5.091: 96.7695% ( 241) 00:12:32.495 5.091 - 5.120: 97.8248% ( 180) 00:12:32.495 5.120 - 5.149: 98.4932% ( 114) 00:12:32.495 5.149 - 5.178: 98.9857% ( 84) 00:12:32.495 5.178 - 5.207: 99.2788% ( 50) 00:12:32.495 5.207 - 5.236: 99.4078% ( 22) 00:12:32.495 5.236 - 5.265: 99.4723% ( 11) 00:12:32.495 5.265 - 5.295: 99.4899% ( 3) 00:12:32.495 5.324 - 5.353: 99.5016% ( 2) 00:12:32.495 5.411 - 5.440: 99.5134% ( 2) 00:12:32.495 7.127 - 7.156: 99.5192% ( 1) 00:12:32.495 7.418 - 7.447: 99.5310% ( 2) 00:12:32.495 7.447 - 7.505: 99.5427% ( 2) 00:12:32.495 7.505 - 7.564: 99.5603% ( 3) 00:12:32.495 7.622 - 7.680: 99.5661% ( 1) 00:12:32.495 7.680 - 7.738: 99.5720% ( 1) 00:12:32.495 7.796 - 7.855: 99.5779% ( 1) 00:12:32.495 7.855 - 7.913: 99.5837% ( 1) 00:12:32.495 7.971 - 8.029: 99.5896% ( 1) 00:12:32.495 8.087 - 8.145: 99.5955% ( 1) 00:12:32.495 8.262 - 8.320: 99.6013% ( 1) 00:12:32.495 8.436 - 8.495: 99.6072% ( 1) 00:12:32.495 8.495 - 8.553: 99.6130% ( 1) 00:12:32.495 8.553 - 8.611: 99.6189% ( 1) 00:12:32.495 8.785 - 8.844: 99.6306% ( 2) 00:12:32.495 8.844 - 8.902: 99.6365% ( 1) 00:12:32.495 8.960 - 9.018: 99.6482% ( 2) 00:12:32.495 9.018 - 9.076: 99.6541% ( 1) 00:12:32.495 9.076 - 9.135: 99.6599% ( 1) 00:12:32.495 9.251 - 9.309: 99.6658% ( 1) 00:12:32.495 9.367 - 9.425: 99.6717% ( 1) 00:12:32.495 9.425 - 9.484: 99.6775% ( 1) 00:12:32.495 9.542 - 9.600: 99.7010% ( 4) 00:12:32.495 9.600 - 9.658: 99.7068% ( 1) 00:12:32.495 9.658 - 9.716: 99.7244% ( 3) 00:12:32.495 9.716 - 9.775: 99.7303% ( 1) 00:12:32.495 9.775 - 9.833: 99.7479% ( 3) 00:12:32.495 9.949 - 10.007: 99.7596% ( 2) 00:12:32.495 10.007 - 10.065: 99.7713% ( 2) 00:12:32.495 10.124 - 10.182: 99.7948% ( 4) 00:12:32.495 10.182 - 10.240: 99.8065% ( 2) 00:12:32.495 10.240 - 10.298: 99.8124% ( 1) 00:12:32.495 10.356 - 10.415: 99.8300% ( 3) 00:12:32.495 10.415 - 10.473: 99.8358% ( 1) 00:12:32.495 10.473 - 10.531: 99.8417% ( 1) 00:12:32.495 10.764 - 10.822: 99.8476% ( 1) 00:12:32.495 10.880 - 10.938: 99.8534% ( 1) 00:12:32.495 10.996 - 11.055: 99.8593% ( 1) 00:12:32.495 11.055 - 11.113: 99.8652% ( 1) 00:12:32.495 11.927 - 11.985: 99.8710% ( 1) 00:12:32.495 14.022 - 14.080: 99.8769% ( 1) 00:12:32.495 55.389 - 55.622: 99.8827% ( 1) 00:12:32.495 3991.738 - 4021.527: 99.9941% ( 19) 00:12:32.495 4021.527 - 4051.316: 100.0000% ( 1) 00:12:32.495 00:12:32.495 Complete histogram 00:12:32.496 ================== 00:12:32.496 Range in us Cumulative Count 00:12:32.496 2.705 - 2.720: 0.0821% ( 14) 00:12:32.496 2.720 - 
2.735: 0.8443% ( 130) 00:12:32.496 2.735 - 2.749: 1.8293% ( 168) 00:12:32.496 2.749 - 2.764: 2.7322% ( 154) 00:12:32.496 2.764 - 2.778: 5.5347% ( 478) 00:12:32.496 2.778 - [2024-07-15 20:10:57.822621] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:32.754 2.793: 25.7856% ( 3454) 00:12:32.754 2.793 - 2.807: 64.0244% ( 6522) 00:12:32.754 2.807 - 2.822: 82.0591% ( 3076) 00:12:32.754 2.822 - 2.836: 86.1984% ( 706) 00:12:32.754 2.836 - 2.851: 88.3443% ( 366) 00:12:32.754 2.851 - 2.865: 89.9508% ( 274) 00:12:32.754 2.865 - 2.880: 92.9292% ( 508) 00:12:32.754 2.880 - 2.895: 96.5174% ( 612) 00:12:32.754 2.895 - 2.909: 98.2880% ( 302) 00:12:32.754 2.909 - 2.924: 98.7981% ( 87) 00:12:32.754 2.924 - 2.938: 99.0209% ( 38) 00:12:32.754 2.938 - 2.953: 99.1323% ( 19) 00:12:32.754 2.953 - 2.967: 99.1909% ( 10) 00:12:32.754 2.967 - 2.982: 99.2026% ( 2) 00:12:32.754 2.982 - 2.996: 99.2202% ( 3) 00:12:32.754 2.996 - 3.011: 99.2261% ( 1) 00:12:32.754 3.025 - 3.040: 99.2319% ( 1) 00:12:32.754 3.171 - 3.185: 99.2378% ( 1) 00:12:32.754 5.411 - 5.440: 99.2495% ( 2) 00:12:32.754 5.440 - 5.469: 99.2554% ( 1) 00:12:32.754 5.498 - 5.527: 99.2613% ( 1) 00:12:32.754 5.673 - 5.702: 99.2671% ( 1) 00:12:32.754 5.702 - 5.731: 99.2788% ( 2) 00:12:32.754 5.760 - 5.789: 99.2847% ( 1) 00:12:32.754 5.818 - 5.847: 99.2906% ( 1) 00:12:32.754 5.847 - 5.876: 99.2964% ( 1) 00:12:32.754 5.935 - 5.964: 99.3023% ( 1) 00:12:32.754 5.993 - 6.022: 99.3082% ( 1) 00:12:32.754 6.022 - 6.051: 99.3140% ( 1) 00:12:32.754 6.080 - 6.109: 99.3199% ( 1) 00:12:32.754 6.342 - 6.371: 99.3258% ( 1) 00:12:32.754 6.371 - 6.400: 99.3316% ( 1) 00:12:32.754 6.545 - 6.575: 99.3433% ( 2) 00:12:32.754 6.662 - 6.691: 99.3551% ( 2) 00:12:32.754 6.720 - 6.749: 99.3609% ( 1) 00:12:32.754 6.982 - 7.011: 99.3668% ( 1) 00:12:32.754 7.040 - 7.069: 99.3727% ( 1) 00:12:32.754 7.098 - 7.127: 99.3785% ( 1) 00:12:32.754 7.185 - 7.215: 99.3844% ( 1) 00:12:32.754 7.273 - 7.302: 99.3902% ( 1) 00:12:32.754 7.302 - 7.331: 99.3961% ( 1) 00:12:32.754 7.331 - 7.360: 99.4020% ( 1) 00:12:32.754 7.418 - 7.447: 99.4078% ( 1) 00:12:32.754 7.447 - 7.505: 99.4196% ( 2) 00:12:32.754 7.564 - 7.622: 99.4254% ( 1) 00:12:32.754 7.738 - 7.796: 99.4313% ( 1) 00:12:32.754 7.855 - 7.913: 99.4489% ( 3) 00:12:32.754 8.029 - 8.087: 99.4665% ( 3) 00:12:32.755 8.087 - 8.145: 99.4782% ( 2) 00:12:32.755 8.145 - 8.204: 99.4841% ( 1) 00:12:32.755 8.204 - 8.262: 99.4958% ( 2) 00:12:32.755 8.436 - 8.495: 99.5134% ( 3) 00:12:32.755 8.553 - 8.611: 99.5192% ( 1) 00:12:32.755 8.611 - 8.669: 99.5251% ( 1) 00:12:32.755 8.669 - 8.727: 99.5310% ( 1) 00:12:32.755 8.960 - 9.018: 99.5427% ( 2) 00:12:32.755 9.076 - 9.135: 99.5485% ( 1) 00:12:32.755 9.251 - 9.309: 99.5544% ( 1) 00:12:32.755 9.716 - 9.775: 99.5603% ( 1) 00:12:32.755 9.891 - 9.949: 99.5661% ( 1) 00:12:32.755 10.007 - 10.065: 99.5720% ( 1) 00:12:32.755 13.091 - 13.149: 99.5779% ( 1) 00:12:32.755 15.360 - 15.476: 99.5837% ( 1) 00:12:32.755 3813.004 - 3842.793: 99.5896% ( 1) 00:12:32.755 3991.738 - 4021.527: 99.9883% ( 68) 00:12:32.755 4170.473 - 4200.262: 99.9941% ( 1) 00:12:32.755 6970.647 - 7000.436: 100.0000% ( 1) 00:12:32.755 00:12:32.755 20:10:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:32.755 20:10:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:32.755 20:10:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 
-- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:32.755 20:10:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:32.755 20:10:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:33.013 [ 00:12:33.013 { 00:12:33.013 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:33.013 "subtype": "Discovery", 00:12:33.013 "listen_addresses": [], 00:12:33.013 "allow_any_host": true, 00:12:33.013 "hosts": [] 00:12:33.013 }, 00:12:33.013 { 00:12:33.013 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:33.013 "subtype": "NVMe", 00:12:33.013 "listen_addresses": [ 00:12:33.013 { 00:12:33.013 "trtype": "VFIOUSER", 00:12:33.013 "adrfam": "IPv4", 00:12:33.013 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:33.013 "trsvcid": "0" 00:12:33.013 } 00:12:33.013 ], 00:12:33.013 "allow_any_host": true, 00:12:33.013 "hosts": [], 00:12:33.013 "serial_number": "SPDK1", 00:12:33.013 "model_number": "SPDK bdev Controller", 00:12:33.013 "max_namespaces": 32, 00:12:33.013 "min_cntlid": 1, 00:12:33.013 "max_cntlid": 65519, 00:12:33.013 "namespaces": [ 00:12:33.013 { 00:12:33.013 "nsid": 1, 00:12:33.013 "bdev_name": "Malloc1", 00:12:33.013 "name": "Malloc1", 00:12:33.013 "nguid": "86AB6C074F7946D2908343C237C655FE", 00:12:33.013 "uuid": "86ab6c07-4f79-46d2-9083-43c237c655fe" 00:12:33.013 } 00:12:33.013 ] 00:12:33.013 }, 00:12:33.013 { 00:12:33.013 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:33.013 "subtype": "NVMe", 00:12:33.013 "listen_addresses": [ 00:12:33.013 { 00:12:33.013 "trtype": "VFIOUSER", 00:12:33.013 "adrfam": "IPv4", 00:12:33.013 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:33.013 "trsvcid": "0" 00:12:33.013 } 00:12:33.013 ], 00:12:33.013 "allow_any_host": true, 00:12:33.013 "hosts": [], 00:12:33.013 "serial_number": "SPDK2", 00:12:33.013 "model_number": "SPDK bdev Controller", 00:12:33.013 "max_namespaces": 32, 00:12:33.013 "min_cntlid": 1, 00:12:33.013 "max_cntlid": 65519, 00:12:33.013 "namespaces": [ 00:12:33.013 { 00:12:33.013 "nsid": 1, 00:12:33.013 "bdev_name": "Malloc2", 00:12:33.013 "name": "Malloc2", 00:12:33.013 "nguid": "9238B0227268404082A432C2B78EDD55", 00:12:33.013 "uuid": "9238b022-7268-4040-82a4-32c2b78edd55" 00:12:33.013 } 00:12:33.013 ] 00:12:33.013 } 00:12:33.013 ] 00:12:33.013 20:10:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:33.013 20:10:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:33.013 20:10:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4153130 00:12:33.013 20:10:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:33.013 20:10:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:33.013 20:10:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:33.013 20:10:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:33.013 20:10:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:33.013 20:10:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:33.013 20:10:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:33.013 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.013 [2024-07-15 20:10:58.323956] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:33.271 Malloc3 00:12:33.271 20:10:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:33.528 [2024-07-15 20:10:58.653829] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:33.528 20:10:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:33.528 Asynchronous Event Request test 00:12:33.528 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:33.528 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:33.528 Registering asynchronous event callbacks... 00:12:33.528 Starting namespace attribute notice tests for all controllers... 00:12:33.528 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:33.528 aer_cb - Changed Namespace 00:12:33.528 Cleaning up... 00:12:33.788 [ 00:12:33.788 { 00:12:33.788 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:33.788 "subtype": "Discovery", 00:12:33.788 "listen_addresses": [], 00:12:33.788 "allow_any_host": true, 00:12:33.788 "hosts": [] 00:12:33.788 }, 00:12:33.788 { 00:12:33.788 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:33.788 "subtype": "NVMe", 00:12:33.788 "listen_addresses": [ 00:12:33.788 { 00:12:33.788 "trtype": "VFIOUSER", 00:12:33.788 "adrfam": "IPv4", 00:12:33.788 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:33.788 "trsvcid": "0" 00:12:33.788 } 00:12:33.788 ], 00:12:33.788 "allow_any_host": true, 00:12:33.788 "hosts": [], 00:12:33.788 "serial_number": "SPDK1", 00:12:33.788 "model_number": "SPDK bdev Controller", 00:12:33.788 "max_namespaces": 32, 00:12:33.788 "min_cntlid": 1, 00:12:33.788 "max_cntlid": 65519, 00:12:33.788 "namespaces": [ 00:12:33.788 { 00:12:33.788 "nsid": 1, 00:12:33.788 "bdev_name": "Malloc1", 00:12:33.788 "name": "Malloc1", 00:12:33.788 "nguid": "86AB6C074F7946D2908343C237C655FE", 00:12:33.788 "uuid": "86ab6c07-4f79-46d2-9083-43c237c655fe" 00:12:33.788 }, 00:12:33.788 { 00:12:33.788 "nsid": 2, 00:12:33.788 "bdev_name": "Malloc3", 00:12:33.788 "name": "Malloc3", 00:12:33.788 "nguid": "53D3EAEDD21C49EBA4F74128138FD300", 00:12:33.788 "uuid": "53d3eaed-d21c-49eb-a4f7-4128138fd300" 00:12:33.788 } 00:12:33.788 ] 00:12:33.788 }, 00:12:33.788 { 00:12:33.788 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:33.788 "subtype": "NVMe", 00:12:33.788 "listen_addresses": [ 00:12:33.788 { 00:12:33.788 "trtype": "VFIOUSER", 00:12:33.788 "adrfam": "IPv4", 00:12:33.788 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:33.788 "trsvcid": "0" 00:12:33.788 } 00:12:33.788 ], 00:12:33.788 "allow_any_host": true, 00:12:33.788 "hosts": [], 00:12:33.788 "serial_number": "SPDK2", 00:12:33.788 "model_number": "SPDK bdev Controller", 00:12:33.788 
"max_namespaces": 32, 00:12:33.788 "min_cntlid": 1, 00:12:33.788 "max_cntlid": 65519, 00:12:33.788 "namespaces": [ 00:12:33.788 { 00:12:33.788 "nsid": 1, 00:12:33.788 "bdev_name": "Malloc2", 00:12:33.788 "name": "Malloc2", 00:12:33.788 "nguid": "9238B0227268404082A432C2B78EDD55", 00:12:33.788 "uuid": "9238b022-7268-4040-82a4-32c2b78edd55" 00:12:33.788 } 00:12:33.788 ] 00:12:33.788 } 00:12:33.788 ] 00:12:33.788 20:10:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4153130 00:12:33.788 20:10:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:33.788 20:10:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:33.788 20:10:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:33.788 20:10:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:33.788 [2024-07-15 20:10:58.965385] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:12:33.788 [2024-07-15 20:10:58.965420] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4153344 ] 00:12:33.788 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.788 [2024-07-15 20:10:59.003655] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:33.788 [2024-07-15 20:10:59.005963] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:33.788 [2024-07-15 20:10:59.005989] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f00f68c1000 00:12:33.788 [2024-07-15 20:10:59.006962] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:33.788 [2024-07-15 20:10:59.007978] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:33.788 [2024-07-15 20:10:59.008982] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:33.788 [2024-07-15 20:10:59.009998] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:33.788 [2024-07-15 20:10:59.011003] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:33.788 [2024-07-15 20:10:59.012015] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:33.788 [2024-07-15 20:10:59.013018] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:33.788 [2024-07-15 20:10:59.014024] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:33.788 [2024-07-15 20:10:59.015026] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:33.788 [2024-07-15 20:10:59.015041] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f00f68b6000 00:12:33.788 [2024-07-15 20:10:59.016459] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:33.788 [2024-07-15 20:10:59.036219] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:33.788 [2024-07-15 20:10:59.036248] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:33.788 [2024-07-15 20:10:59.038346] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:33.788 [2024-07-15 20:10:59.038403] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:33.788 [2024-07-15 20:10:59.038500] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:33.788 [2024-07-15 20:10:59.038522] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:33.788 [2024-07-15 20:10:59.038530] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:33.788 [2024-07-15 20:10:59.039353] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:33.788 [2024-07-15 20:10:59.039367] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:33.788 [2024-07-15 20:10:59.039376] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:33.788 [2024-07-15 20:10:59.040357] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:33.788 [2024-07-15 20:10:59.040370] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:33.788 [2024-07-15 20:10:59.040380] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:33.788 [2024-07-15 20:10:59.041360] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:33.788 [2024-07-15 20:10:59.041373] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:33.788 [2024-07-15 20:10:59.042373] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:33.788 [2024-07-15 20:10:59.042385] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:33.788 [2024-07-15 20:10:59.042391] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:33.788 [2024-07-15 20:10:59.042400] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:33.788 [2024-07-15 20:10:59.042507] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:33.788 [2024-07-15 20:10:59.042518] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:33.788 [2024-07-15 20:10:59.042524] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:33.788 [2024-07-15 20:10:59.043378] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:33.788 [2024-07-15 20:10:59.044377] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:33.788 [2024-07-15 20:10:59.045387] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:33.788 [2024-07-15 20:10:59.046393] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:33.788 [2024-07-15 20:10:59.046444] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:33.788 [2024-07-15 20:10:59.047410] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:33.788 [2024-07-15 20:10:59.047423] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:33.788 [2024-07-15 20:10:59.047429] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:33.788 [2024-07-15 20:10:59.047454] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:33.788 [2024-07-15 20:10:59.047464] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:33.788 [2024-07-15 20:10:59.047480] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:33.788 [2024-07-15 20:10:59.047486] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:33.788 [2024-07-15 20:10:59.047500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:33.788 [2024-07-15 20:10:59.055266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:33.788 [2024-07-15 20:10:59.055281] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:33.788 [2024-07-15 20:10:59.055290] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:33.788 [2024-07-15 20:10:59.055296] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:33.788 [2024-07-15 20:10:59.055302] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:33.788 [2024-07-15 20:10:59.055308] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:33.788 [2024-07-15 20:10:59.055314] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:33.788 [2024-07-15 20:10:59.055321] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:33.788 [2024-07-15 20:10:59.055330] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:33.788 [2024-07-15 20:10:59.055343] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:33.788 [2024-07-15 20:10:59.063263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:33.788 [2024-07-15 20:10:59.063286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.788 [2024-07-15 20:10:59.063297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.788 [2024-07-15 20:10:59.063308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.788 [2024-07-15 20:10:59.063318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.788 [2024-07-15 20:10:59.063324] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:33.788 [2024-07-15 20:10:59.063335] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:33.788 [2024-07-15 20:10:59.063347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:33.788 [2024-07-15 20:10:59.071262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:33.788 [2024-07-15 20:10:59.071272] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:33.788 [2024-07-15 20:10:59.071278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:33.788 [2024-07-15 20:10:59.071287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:33.788 [2024-07-15 20:10:59.071294] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:33.788 [2024-07-15 20:10:59.071305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:33.788 [2024-07-15 20:10:59.079261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:33.788 [2024-07-15 20:10:59.079345] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:33.788 [2024-07-15 20:10:59.079355] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:33.788 [2024-07-15 20:10:59.079366] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:33.788 [2024-07-15 20:10:59.079372] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:33.788 [2024-07-15 20:10:59.079380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:33.788 [2024-07-15 20:10:59.087262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:33.788 [2024-07-15 20:10:59.087277] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:33.788 [2024-07-15 20:10:59.087292] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:33.788 [2024-07-15 20:10:59.087301] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:33.789 [2024-07-15 20:10:59.087311] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:33.789 [2024-07-15 20:10:59.087317] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:33.789 [2024-07-15 20:10:59.087328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:33.789 [2024-07-15 20:10:59.095264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:33.789 [2024-07-15 20:10:59.095284] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:33.789 [2024-07-15 20:10:59.095295] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:33.789 [2024-07-15 20:10:59.095305] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:33.789 [2024-07-15 20:10:59.095313] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:33.789 [2024-07-15 20:10:59.095321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:33.789 [2024-07-15 20:10:59.103261] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:33.789 [2024-07-15 20:10:59.103275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:33.789 [2024-07-15 20:10:59.103283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:33.789 [2024-07-15 20:10:59.103294] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:33.789 [2024-07-15 20:10:59.103301] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:33.789 [2024-07-15 20:10:59.103308] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:33.789 [2024-07-15 20:10:59.103315] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:33.789 [2024-07-15 20:10:59.103321] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:33.789 [2024-07-15 20:10:59.103327] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:33.789 [2024-07-15 20:10:59.103333] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:33.789 [2024-07-15 20:10:59.103353] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:33.789 [2024-07-15 20:10:59.111262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:33.789 [2024-07-15 20:10:59.111280] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:33.789 [2024-07-15 20:10:59.119262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:33.789 [2024-07-15 20:10:59.119280] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:33.789 [2024-07-15 20:10:59.127261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:33.789 [2024-07-15 20:10:59.127278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:33.789 [2024-07-15 20:10:59.135263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:33.789 [2024-07-15 20:10:59.135287] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:33.789 [2024-07-15 20:10:59.135294] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:33.789 [2024-07-15 20:10:59.135299] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:12:33.789 [2024-07-15 20:10:59.135304] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:33.789 [2024-07-15 20:10:59.135312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:33.789 [2024-07-15 20:10:59.135321] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:33.789 [2024-07-15 20:10:59.135327] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:33.789 [2024-07-15 20:10:59.135335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:33.789 [2024-07-15 20:10:59.135344] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:33.789 [2024-07-15 20:10:59.135350] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:33.789 [2024-07-15 20:10:59.135357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:33.789 [2024-07-15 20:10:59.135366] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:33.789 [2024-07-15 20:10:59.135372] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:33.789 [2024-07-15 20:10:59.135380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:34.049 [2024-07-15 20:10:59.143264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:34.049 [2024-07-15 20:10:59.143284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:34.049 [2024-07-15 20:10:59.143298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:34.049 [2024-07-15 20:10:59.143308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:34.049 ===================================================== 00:12:34.049 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:34.049 ===================================================== 00:12:34.049 Controller Capabilities/Features 00:12:34.049 ================================ 00:12:34.049 Vendor ID: 4e58 00:12:34.049 Subsystem Vendor ID: 4e58 00:12:34.049 Serial Number: SPDK2 00:12:34.049 Model Number: SPDK bdev Controller 00:12:34.049 Firmware Version: 24.09 00:12:34.049 Recommended Arb Burst: 6 00:12:34.049 IEEE OUI Identifier: 8d 6b 50 00:12:34.049 Multi-path I/O 00:12:34.049 May have multiple subsystem ports: Yes 00:12:34.049 May have multiple controllers: Yes 00:12:34.049 Associated with SR-IOV VF: No 00:12:34.049 Max Data Transfer Size: 131072 00:12:34.049 Max Number of Namespaces: 32 00:12:34.049 Max Number of I/O Queues: 127 00:12:34.049 NVMe Specification Version (VS): 1.3 00:12:34.049 NVMe Specification Version (Identify): 1.3 00:12:34.049 Maximum Queue Entries: 256 00:12:34.049 Contiguous Queues Required: Yes 00:12:34.049 Arbitration Mechanisms 
Supported 00:12:34.049 Weighted Round Robin: Not Supported 00:12:34.049 Vendor Specific: Not Supported 00:12:34.049 Reset Timeout: 15000 ms 00:12:34.049 Doorbell Stride: 4 bytes 00:12:34.049 NVM Subsystem Reset: Not Supported 00:12:34.049 Command Sets Supported 00:12:34.049 NVM Command Set: Supported 00:12:34.049 Boot Partition: Not Supported 00:12:34.049 Memory Page Size Minimum: 4096 bytes 00:12:34.049 Memory Page Size Maximum: 4096 bytes 00:12:34.049 Persistent Memory Region: Not Supported 00:12:34.049 Optional Asynchronous Events Supported 00:12:34.049 Namespace Attribute Notices: Supported 00:12:34.049 Firmware Activation Notices: Not Supported 00:12:34.049 ANA Change Notices: Not Supported 00:12:34.049 PLE Aggregate Log Change Notices: Not Supported 00:12:34.049 LBA Status Info Alert Notices: Not Supported 00:12:34.049 EGE Aggregate Log Change Notices: Not Supported 00:12:34.049 Normal NVM Subsystem Shutdown event: Not Supported 00:12:34.049 Zone Descriptor Change Notices: Not Supported 00:12:34.049 Discovery Log Change Notices: Not Supported 00:12:34.049 Controller Attributes 00:12:34.049 128-bit Host Identifier: Supported 00:12:34.049 Non-Operational Permissive Mode: Not Supported 00:12:34.049 NVM Sets: Not Supported 00:12:34.049 Read Recovery Levels: Not Supported 00:12:34.049 Endurance Groups: Not Supported 00:12:34.049 Predictable Latency Mode: Not Supported 00:12:34.049 Traffic Based Keep ALive: Not Supported 00:12:34.049 Namespace Granularity: Not Supported 00:12:34.049 SQ Associations: Not Supported 00:12:34.049 UUID List: Not Supported 00:12:34.049 Multi-Domain Subsystem: Not Supported 00:12:34.049 Fixed Capacity Management: Not Supported 00:12:34.049 Variable Capacity Management: Not Supported 00:12:34.049 Delete Endurance Group: Not Supported 00:12:34.049 Delete NVM Set: Not Supported 00:12:34.049 Extended LBA Formats Supported: Not Supported 00:12:34.049 Flexible Data Placement Supported: Not Supported 00:12:34.049 00:12:34.049 Controller Memory Buffer Support 00:12:34.049 ================================ 00:12:34.049 Supported: No 00:12:34.049 00:12:34.049 Persistent Memory Region Support 00:12:34.049 ================================ 00:12:34.049 Supported: No 00:12:34.049 00:12:34.049 Admin Command Set Attributes 00:12:34.049 ============================ 00:12:34.049 Security Send/Receive: Not Supported 00:12:34.049 Format NVM: Not Supported 00:12:34.049 Firmware Activate/Download: Not Supported 00:12:34.049 Namespace Management: Not Supported 00:12:34.049 Device Self-Test: Not Supported 00:12:34.049 Directives: Not Supported 00:12:34.049 NVMe-MI: Not Supported 00:12:34.049 Virtualization Management: Not Supported 00:12:34.049 Doorbell Buffer Config: Not Supported 00:12:34.049 Get LBA Status Capability: Not Supported 00:12:34.049 Command & Feature Lockdown Capability: Not Supported 00:12:34.049 Abort Command Limit: 4 00:12:34.049 Async Event Request Limit: 4 00:12:34.049 Number of Firmware Slots: N/A 00:12:34.049 Firmware Slot 1 Read-Only: N/A 00:12:34.049 Firmware Activation Without Reset: N/A 00:12:34.049 Multiple Update Detection Support: N/A 00:12:34.049 Firmware Update Granularity: No Information Provided 00:12:34.049 Per-Namespace SMART Log: No 00:12:34.049 Asymmetric Namespace Access Log Page: Not Supported 00:12:34.049 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:34.049 Command Effects Log Page: Supported 00:12:34.049 Get Log Page Extended Data: Supported 00:12:34.049 Telemetry Log Pages: Not Supported 00:12:34.049 Persistent Event Log Pages: Not Supported 
00:12:34.049 Supported Log Pages Log Page: May Support 00:12:34.049 Commands Supported & Effects Log Page: Not Supported 00:12:34.049 Feature Identifiers & Effects Log Page:May Support 00:12:34.049 NVMe-MI Commands & Effects Log Page: May Support 00:12:34.049 Data Area 4 for Telemetry Log: Not Supported 00:12:34.049 Error Log Page Entries Supported: 128 00:12:34.049 Keep Alive: Supported 00:12:34.049 Keep Alive Granularity: 10000 ms 00:12:34.049 00:12:34.049 NVM Command Set Attributes 00:12:34.049 ========================== 00:12:34.049 Submission Queue Entry Size 00:12:34.049 Max: 64 00:12:34.049 Min: 64 00:12:34.049 Completion Queue Entry Size 00:12:34.049 Max: 16 00:12:34.049 Min: 16 00:12:34.049 Number of Namespaces: 32 00:12:34.049 Compare Command: Supported 00:12:34.049 Write Uncorrectable Command: Not Supported 00:12:34.049 Dataset Management Command: Supported 00:12:34.049 Write Zeroes Command: Supported 00:12:34.049 Set Features Save Field: Not Supported 00:12:34.049 Reservations: Not Supported 00:12:34.049 Timestamp: Not Supported 00:12:34.049 Copy: Supported 00:12:34.049 Volatile Write Cache: Present 00:12:34.049 Atomic Write Unit (Normal): 1 00:12:34.049 Atomic Write Unit (PFail): 1 00:12:34.049 Atomic Compare & Write Unit: 1 00:12:34.049 Fused Compare & Write: Supported 00:12:34.049 Scatter-Gather List 00:12:34.049 SGL Command Set: Supported (Dword aligned) 00:12:34.049 SGL Keyed: Not Supported 00:12:34.049 SGL Bit Bucket Descriptor: Not Supported 00:12:34.049 SGL Metadata Pointer: Not Supported 00:12:34.049 Oversized SGL: Not Supported 00:12:34.049 SGL Metadata Address: Not Supported 00:12:34.049 SGL Offset: Not Supported 00:12:34.049 Transport SGL Data Block: Not Supported 00:12:34.049 Replay Protected Memory Block: Not Supported 00:12:34.049 00:12:34.049 Firmware Slot Information 00:12:34.049 ========================= 00:12:34.049 Active slot: 1 00:12:34.049 Slot 1 Firmware Revision: 24.09 00:12:34.049 00:12:34.049 00:12:34.049 Commands Supported and Effects 00:12:34.049 ============================== 00:12:34.049 Admin Commands 00:12:34.049 -------------- 00:12:34.049 Get Log Page (02h): Supported 00:12:34.049 Identify (06h): Supported 00:12:34.049 Abort (08h): Supported 00:12:34.049 Set Features (09h): Supported 00:12:34.049 Get Features (0Ah): Supported 00:12:34.049 Asynchronous Event Request (0Ch): Supported 00:12:34.049 Keep Alive (18h): Supported 00:12:34.049 I/O Commands 00:12:34.049 ------------ 00:12:34.049 Flush (00h): Supported LBA-Change 00:12:34.049 Write (01h): Supported LBA-Change 00:12:34.049 Read (02h): Supported 00:12:34.049 Compare (05h): Supported 00:12:34.049 Write Zeroes (08h): Supported LBA-Change 00:12:34.049 Dataset Management (09h): Supported LBA-Change 00:12:34.049 Copy (19h): Supported LBA-Change 00:12:34.049 00:12:34.050 Error Log 00:12:34.050 ========= 00:12:34.050 00:12:34.050 Arbitration 00:12:34.050 =========== 00:12:34.050 Arbitration Burst: 1 00:12:34.050 00:12:34.050 Power Management 00:12:34.050 ================ 00:12:34.050 Number of Power States: 1 00:12:34.050 Current Power State: Power State #0 00:12:34.050 Power State #0: 00:12:34.050 Max Power: 0.00 W 00:12:34.050 Non-Operational State: Operational 00:12:34.050 Entry Latency: Not Reported 00:12:34.050 Exit Latency: Not Reported 00:12:34.050 Relative Read Throughput: 0 00:12:34.050 Relative Read Latency: 0 00:12:34.050 Relative Write Throughput: 0 00:12:34.050 Relative Write Latency: 0 00:12:34.050 Idle Power: Not Reported 00:12:34.050 Active Power: Not Reported 00:12:34.050 
Non-Operational Permissive Mode: Not Supported 00:12:34.050 00:12:34.050 Health Information 00:12:34.050 ================== 00:12:34.050 Critical Warnings: 00:12:34.050 Available Spare Space: OK 00:12:34.050 Temperature: OK 00:12:34.050 Device Reliability: OK 00:12:34.050 Read Only: No 00:12:34.050 Volatile Memory Backup: OK 00:12:34.050 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:34.050 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:34.050 Available Spare: 0% 00:12:34.050 [2024-07-15 20:10:59.143425] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:34.050 [2024-07-15 20:10:59.151264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:34.050 [2024-07-15 20:10:59.151305] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:34.050 [2024-07-15 20:10:59.151317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.050 [2024-07-15 20:10:59.151326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.050 [2024-07-15 20:10:59.151333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.050 [2024-07-15 20:10:59.151341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.050 [2024-07-15 20:10:59.151409] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:34.050 [2024-07-15 20:10:59.151424] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:34.050 [2024-07-15 20:10:59.152418] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:34.050 [2024-07-15 20:10:59.152483] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:34.050 [2024-07-15 20:10:59.152491] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:34.050 [2024-07-15 20:10:59.153426] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:34.050 [2024-07-15 20:10:59.153442] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:34.050 [2024-07-15 20:10:59.153497] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:34.050 [2024-07-15 20:10:59.156262] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:34.050 Available Spare Threshold: 0% 00:12:34.050 Life Percentage Used: 0% 00:12:34.050 Data Units Read: 0 00:12:34.050 Data Units Written: 0 00:12:34.050 Host Read Commands: 0 00:12:34.050 Host Write Commands: 0 00:12:34.050 Controller Busy Time: 0 minutes 00:12:34.050 Power Cycles: 0 00:12:34.050 Power On Hours: 0 hours 00:12:34.050 Unsafe Shutdowns: 0 00:12:34.050 Unrecoverable Media 
Errors: 0 00:12:34.050 Lifetime Error Log Entries: 0 00:12:34.050 Warning Temperature Time: 0 minutes 00:12:34.050 Critical Temperature Time: 0 minutes 00:12:34.050 00:12:34.050 Number of Queues 00:12:34.050 ================ 00:12:34.050 Number of I/O Submission Queues: 127 00:12:34.050 Number of I/O Completion Queues: 127 00:12:34.050 00:12:34.050 Active Namespaces 00:12:34.050 ================= 00:12:34.050 Namespace ID:1 00:12:34.050 Error Recovery Timeout: Unlimited 00:12:34.050 Command Set Identifier: NVM (00h) 00:12:34.050 Deallocate: Supported 00:12:34.050 Deallocated/Unwritten Error: Not Supported 00:12:34.050 Deallocated Read Value: Unknown 00:12:34.050 Deallocate in Write Zeroes: Not Supported 00:12:34.050 Deallocated Guard Field: 0xFFFF 00:12:34.050 Flush: Supported 00:12:34.050 Reservation: Supported 00:12:34.050 Namespace Sharing Capabilities: Multiple Controllers 00:12:34.050 Size (in LBAs): 131072 (0GiB) 00:12:34.050 Capacity (in LBAs): 131072 (0GiB) 00:12:34.050 Utilization (in LBAs): 131072 (0GiB) 00:12:34.050 NGUID: 9238B0227268404082A432C2B78EDD55 00:12:34.050 UUID: 9238b022-7268-4040-82a4-32c2b78edd55 00:12:34.050 Thin Provisioning: Not Supported 00:12:34.050 Per-NS Atomic Units: Yes 00:12:34.050 Atomic Boundary Size (Normal): 0 00:12:34.050 Atomic Boundary Size (PFail): 0 00:12:34.050 Atomic Boundary Offset: 0 00:12:34.050 Maximum Single Source Range Length: 65535 00:12:34.050 Maximum Copy Length: 65535 00:12:34.050 Maximum Source Range Count: 1 00:12:34.050 NGUID/EUI64 Never Reused: No 00:12:34.050 Namespace Write Protected: No 00:12:34.050 Number of LBA Formats: 1 00:12:34.050 Current LBA Format: LBA Format #00 00:12:34.050 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:34.050 00:12:34.050 20:10:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:34.050 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.309 [2024-07-15 20:10:59.404483] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:39.578 Initializing NVMe Controllers 00:12:39.578 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:39.578 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:39.578 Initialization complete. Launching workers. 
00:12:39.578 ======================================================== 00:12:39.578 Latency(us) 00:12:39.578 Device Information : IOPS MiB/s Average min max 00:12:39.578 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 41437.41 161.86 3088.13 958.25 6021.44 00:12:39.578 ======================================================== 00:12:39.578 Total : 41437.41 161.86 3088.13 958.25 6021.44 00:12:39.578 00:12:39.578 [2024-07-15 20:11:04.512548] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:39.578 20:11:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:39.578 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.578 [2024-07-15 20:11:04.763286] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:44.873 Initializing NVMe Controllers 00:12:44.873 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:44.873 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:44.873 Initialization complete. Launching workers. 00:12:44.873 ======================================================== 00:12:44.873 Latency(us) 00:12:44.873 Device Information : IOPS MiB/s Average min max 00:12:44.873 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24083.62 94.08 5314.68 1442.67 11541.17 00:12:44.873 ======================================================== 00:12:44.873 Total : 24083.62 94.08 5314.68 1442.67 11541.17 00:12:44.873 00:12:44.873 [2024-07-15 20:11:09.787206] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:44.873 20:11:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:44.874 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.874 [2024-07-15 20:11:10.041065] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:50.143 [2024-07-15 20:11:15.185375] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:50.143 Initializing NVMe Controllers 00:12:50.143 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:50.143 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:50.143 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:50.143 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:50.143 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:50.143 Initialization complete. Launching workers. 
00:12:50.143 Starting thread on core 2 00:12:50.143 Starting thread on core 3 00:12:50.143 Starting thread on core 1 00:12:50.143 20:11:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:50.143 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.404 [2024-07-15 20:11:15.525839] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:53.770 [2024-07-15 20:11:18.590656] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:53.770 Initializing NVMe Controllers 00:12:53.770 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:53.770 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:53.770 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:53.770 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:53.770 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:53.770 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:53.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:53.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:53.770 Initialization complete. Launching workers. 00:12:53.770 Starting thread on core 1 with urgent priority queue 00:12:53.770 Starting thread on core 2 with urgent priority queue 00:12:53.770 Starting thread on core 3 with urgent priority queue 00:12:53.770 Starting thread on core 0 with urgent priority queue 00:12:53.770 SPDK bdev Controller (SPDK2 ) core 0: 7309.67 IO/s 13.68 secs/100000 ios 00:12:53.770 SPDK bdev Controller (SPDK2 ) core 1: 8444.67 IO/s 11.84 secs/100000 ios 00:12:53.770 SPDK bdev Controller (SPDK2 ) core 2: 7381.33 IO/s 13.55 secs/100000 ios 00:12:53.770 SPDK bdev Controller (SPDK2 ) core 3: 9267.33 IO/s 10.79 secs/100000 ios 00:12:53.770 ======================================================== 00:12:53.770 00:12:53.770 20:11:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:53.770 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.770 [2024-07-15 20:11:18.915753] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:53.770 Initializing NVMe Controllers 00:12:53.770 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:53.770 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:53.770 Namespace ID: 1 size: 0GB 00:12:53.770 Initialization complete. 00:12:53.770 INFO: using host memory buffer for IO 00:12:53.770 Hello world! 
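The spdk_nvme_perf, reconnect, arbitration, and hello_world runs above all address the target through the same vfio-user transport ID string. As a minimal sketch of that invocation pattern — assuming an SPDK build tree as the working directory, with the long Jenkins workspace prefix shortened and the same vfio-user2 socket directory the target created earlier in this log — the -w read run traced above boils down to:

  # Transport ID: trtype selects the vfio-user transport, traddr is the
  # socket directory exposed by the target, subnqn names the subsystem.
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

  # 4 KiB reads for 5 seconds at queue depth 128 on core mask 0x2,
  # mirroring the parameters of the read run whose latency table appears above.
  ./build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The write, reconnect, arbitration, hello_world, overhead, and aer steps in this log differ mainly in the binary and the workload flags; the -r transport ID stays the same.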
00:12:53.770 [2024-07-15 20:11:18.928838] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:53.770 20:11:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:53.770 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.028 [2024-07-15 20:11:19.249089] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:55.405 Initializing NVMe Controllers 00:12:55.405 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:55.405 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:55.405 Initialization complete. Launching workers. 00:12:55.405 submit (in ns) avg, min, max = 8730.7, 4531.8, 4004100.9 00:12:55.405 complete (in ns) avg, min, max = 30359.7, 2708.2, 4003679.1 00:12:55.405 00:12:55.405 Submit histogram 00:12:55.405 ================ 00:12:55.405 Range in us Cumulative Count 00:12:55.405 4.509 - 4.538: 0.0164% ( 2) 00:12:55.405 4.538 - 4.567: 1.1178% ( 134) 00:12:55.405 4.567 - 4.596: 3.3205% ( 268) 00:12:55.405 4.596 - 4.625: 6.2135% ( 352) 00:12:55.405 4.625 - 4.655: 10.8490% ( 564) 00:12:55.405 4.655 - 4.684: 25.0185% ( 1724) 00:12:55.405 4.684 - 4.713: 37.1825% ( 1480) 00:12:55.405 4.713 - 4.742: 48.5740% ( 1386) 00:12:55.405 4.742 - 4.771: 60.1381% ( 1407) 00:12:55.405 4.771 - 4.800: 69.2529% ( 1109) 00:12:55.405 4.800 - 4.829: 78.2280% ( 1092) 00:12:55.405 4.829 - 4.858: 82.7813% ( 554) 00:12:55.405 4.858 - 4.887: 85.3785% ( 316) 00:12:55.405 4.887 - 4.916: 86.7100% ( 162) 00:12:55.405 4.916 - 4.945: 88.4688% ( 214) 00:12:55.405 4.945 - 4.975: 90.2441% ( 216) 00:12:55.405 4.975 - 5.004: 92.2824% ( 248) 00:12:55.405 5.004 - 5.033: 94.3454% ( 251) 00:12:55.405 5.033 - 5.062: 96.2439% ( 231) 00:12:55.405 5.062 - 5.091: 97.3288% ( 132) 00:12:55.405 5.091 - 5.120: 98.2000% ( 106) 00:12:55.405 5.120 - 5.149: 98.6850% ( 59) 00:12:55.405 5.149 - 5.178: 99.0630% ( 46) 00:12:55.405 5.178 - 5.207: 99.2274% ( 20) 00:12:55.405 5.207 - 5.236: 99.3260% ( 12) 00:12:55.405 5.236 - 5.265: 99.3671% ( 5) 00:12:55.405 5.265 - 5.295: 99.3918% ( 3) 00:12:55.405 5.295 - 5.324: 99.4082% ( 2) 00:12:55.405 5.324 - 5.353: 99.4247% ( 2) 00:12:55.405 5.353 - 5.382: 99.4329% ( 1) 00:12:55.405 5.382 - 5.411: 99.4411% ( 1) 00:12:55.405 5.411 - 5.440: 99.4493% ( 1) 00:12:55.405 7.360 - 7.389: 99.4575% ( 1) 00:12:55.405 7.505 - 7.564: 99.4740% ( 2) 00:12:55.405 7.564 - 7.622: 99.4822% ( 1) 00:12:55.405 7.855 - 7.913: 99.4986% ( 2) 00:12:55.405 7.913 - 7.971: 99.5233% ( 3) 00:12:55.405 8.087 - 8.145: 99.5315% ( 1) 00:12:55.405 8.145 - 8.204: 99.5480% ( 2) 00:12:55.405 8.262 - 8.320: 99.5562% ( 1) 00:12:55.405 8.320 - 8.378: 99.5726% ( 2) 00:12:55.405 8.378 - 8.436: 99.5808% ( 1) 00:12:55.405 8.495 - 8.553: 99.6055% ( 3) 00:12:55.405 8.611 - 8.669: 99.6219% ( 2) 00:12:55.405 8.669 - 8.727: 99.6301% ( 1) 00:12:55.405 8.785 - 8.844: 99.6384% ( 1) 00:12:55.405 8.902 - 8.960: 99.6466% ( 1) 00:12:55.405 8.960 - 9.018: 99.6548% ( 1) 00:12:55.405 9.018 - 9.076: 99.6630% ( 1) 00:12:55.405 9.135 - 9.193: 99.6712% ( 1) 00:12:55.405 9.193 - 9.251: 99.6795% ( 1) 00:12:55.405 9.251 - 9.309: 99.6877% ( 1) 00:12:55.405 9.367 - 9.425: 99.7041% ( 2) 00:12:55.405 9.425 - 9.484: 99.7288% ( 3) 00:12:55.405 9.484 - 9.542: 99.7370% ( 1) 00:12:55.405 9.600 - 9.658: 99.7452% ( 1) 00:12:55.405 9.658 - 9.716: 
99.7534% ( 1) 00:12:55.405 9.775 - 9.833: 99.7699% ( 2) 00:12:55.405 9.891 - 9.949: 99.7863% ( 2) 00:12:55.405 9.949 - 10.007: 99.7945% ( 1) 00:12:55.405 10.007 - 10.065: 99.8110% ( 2) 00:12:55.405 10.065 - 10.124: 99.8192% ( 1) 00:12:55.405 10.124 - 10.182: 99.8274% ( 1) 00:12:55.405 10.182 - 10.240: 99.8356% ( 1) 00:12:55.405 10.356 - 10.415: 99.8438% ( 1) 00:12:55.405 10.473 - 10.531: 99.8521% ( 1) 00:12:55.405 10.531 - 10.589: 99.8603% ( 1) 00:12:55.405 10.705 - 10.764: 99.8685% ( 1) 00:12:55.405 10.880 - 10.938: 99.8767% ( 1) 00:12:55.405 10.938 - 10.996: 99.8849% ( 1) 00:12:55.405 11.229 - 11.287: 99.8932% ( 1) 00:12:55.405 20.364 - 20.480: 99.9014% ( 1) 00:12:55.405 3991.738 - 4021.527: 100.0000% ( 12) 00:12:55.405 00:12:55.405 Complete histogram 00:12:55.405 ================== 00:12:55.405 Range in us Cumulative Count 00:12:55.405 2.705 - 2.720: 0.0575% ( 7) 00:12:55.405 2.720 - 2.735: 3.3944% ( 406) 00:12:55.405 2.735 - 2.749: 24.4267% ( 2559) 00:12:55.405 2.749 - 2.764: 46.6837% ( 2708) 00:12:55.405 2.764 - [2024-07-15 20:11:20.351884] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:55.405 2.778: 54.5328% ( 955) 00:12:55.405 2.778 - 2.793: 64.2476% ( 1182) 00:12:55.405 2.793 - 2.807: 81.6800% ( 2121) 00:12:55.405 2.807 - 2.822: 90.6797% ( 1095) 00:12:55.405 2.822 - 2.836: 94.2550% ( 435) 00:12:55.405 2.836 - 2.851: 96.4165% ( 263) 00:12:55.405 2.851 - 2.865: 97.3617% ( 115) 00:12:55.405 2.865 - 2.880: 98.0028% ( 78) 00:12:55.405 2.880 - 2.895: 98.4137% ( 50) 00:12:55.405 2.895 - 2.909: 98.5781% ( 20) 00:12:55.405 2.909 - 2.924: 98.6932% ( 14) 00:12:55.405 2.924 - 2.938: 98.7343% ( 5) 00:12:55.405 2.938 - 2.953: 98.7754% ( 5) 00:12:55.405 2.953 - 2.967: 98.8000% ( 3) 00:12:55.405 2.967 - 2.982: 98.8576% ( 7) 00:12:55.405 2.996 - 3.011: 98.8987% ( 5) 00:12:55.405 3.011 - 3.025: 98.9151% ( 2) 00:12:55.405 3.025 - 3.040: 98.9233% ( 1) 00:12:55.405 3.040 - 3.055: 98.9480% ( 3) 00:12:55.405 3.055 - 3.069: 98.9562% ( 1) 00:12:55.405 3.069 - 3.084: 98.9644% ( 1) 00:12:55.405 3.084 - 3.098: 98.9726% ( 1) 00:12:55.405 5.382 - 5.411: 98.9808% ( 1) 00:12:55.405 5.411 - 5.440: 98.9891% ( 1) 00:12:55.405 5.498 - 5.527: 98.9973% ( 1) 00:12:55.405 5.585 - 5.615: 99.0055% ( 1) 00:12:55.405 5.673 - 5.702: 99.0137% ( 1) 00:12:55.405 5.993 - 6.022: 99.0219% ( 1) 00:12:55.405 6.022 - 6.051: 99.0302% ( 1) 00:12:55.405 6.167 - 6.196: 99.0384% ( 1) 00:12:55.405 6.429 - 6.458: 99.0466% ( 1) 00:12:55.405 6.662 - 6.691: 99.0548% ( 1) 00:12:55.405 6.691 - 6.720: 99.0713% ( 2) 00:12:55.405 6.982 - 7.011: 99.0795% ( 1) 00:12:55.405 7.069 - 7.098: 99.0877% ( 1) 00:12:55.405 7.127 - 7.156: 99.0959% ( 1) 00:12:55.405 7.156 - 7.185: 99.1041% ( 1) 00:12:55.405 7.215 - 7.244: 99.1206% ( 2) 00:12:55.405 7.331 - 7.360: 99.1288% ( 1) 00:12:55.405 7.389 - 7.418: 99.1370% ( 1) 00:12:55.405 7.447 - 7.505: 99.1534% ( 2) 00:12:55.405 7.505 - 7.564: 99.1617% ( 1) 00:12:55.405 7.564 - 7.622: 99.1699% ( 1) 00:12:55.405 7.622 - 7.680: 99.1863% ( 2) 00:12:55.405 7.738 - 7.796: 99.1945% ( 1) 00:12:55.405 7.796 - 7.855: 99.2028% ( 1) 00:12:55.405 7.913 - 7.971: 99.2192% ( 2) 00:12:55.405 8.145 - 8.204: 99.2274% ( 1) 00:12:55.405 8.262 - 8.320: 99.2356% ( 1) 00:12:55.405 8.378 - 8.436: 99.2521% ( 2) 00:12:55.405 8.436 - 8.495: 99.2603% ( 1) 00:12:55.405 9.251 - 9.309: 99.2685% ( 1) 00:12:55.405 9.425 - 9.484: 99.2767% ( 1) 00:12:55.405 10.647 - 10.705: 99.2850% ( 1) 00:12:55.405 12.916 - 12.975: 99.2932% ( 1) 00:12:55.405 14.022 - 14.080: 99.3014% ( 1) 00:12:55.405 
18.502 - 18.618: 99.3096% ( 1) 00:12:55.405 3813.004 - 3842.793: 99.3178% ( 1) 00:12:55.405 3902.371 - 3932.160: 99.3260% ( 1) 00:12:55.405 3991.738 - 4021.527: 100.0000% ( 82) 00:12:55.405 00:12:55.405 20:11:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:55.405 20:11:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:55.405 20:11:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:55.405 20:11:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:55.405 20:11:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:55.405 [ 00:12:55.405 { 00:12:55.405 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:55.405 "subtype": "Discovery", 00:12:55.405 "listen_addresses": [], 00:12:55.405 "allow_any_host": true, 00:12:55.405 "hosts": [] 00:12:55.405 }, 00:12:55.405 { 00:12:55.406 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:55.406 "subtype": "NVMe", 00:12:55.406 "listen_addresses": [ 00:12:55.406 { 00:12:55.406 "trtype": "VFIOUSER", 00:12:55.406 "adrfam": "IPv4", 00:12:55.406 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:55.406 "trsvcid": "0" 00:12:55.406 } 00:12:55.406 ], 00:12:55.406 "allow_any_host": true, 00:12:55.406 "hosts": [], 00:12:55.406 "serial_number": "SPDK1", 00:12:55.406 "model_number": "SPDK bdev Controller", 00:12:55.406 "max_namespaces": 32, 00:12:55.406 "min_cntlid": 1, 00:12:55.406 "max_cntlid": 65519, 00:12:55.406 "namespaces": [ 00:12:55.406 { 00:12:55.406 "nsid": 1, 00:12:55.406 "bdev_name": "Malloc1", 00:12:55.406 "name": "Malloc1", 00:12:55.406 "nguid": "86AB6C074F7946D2908343C237C655FE", 00:12:55.406 "uuid": "86ab6c07-4f79-46d2-9083-43c237c655fe" 00:12:55.406 }, 00:12:55.406 { 00:12:55.406 "nsid": 2, 00:12:55.406 "bdev_name": "Malloc3", 00:12:55.406 "name": "Malloc3", 00:12:55.406 "nguid": "53D3EAEDD21C49EBA4F74128138FD300", 00:12:55.406 "uuid": "53d3eaed-d21c-49eb-a4f7-4128138fd300" 00:12:55.406 } 00:12:55.406 ] 00:12:55.406 }, 00:12:55.406 { 00:12:55.406 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:55.406 "subtype": "NVMe", 00:12:55.406 "listen_addresses": [ 00:12:55.406 { 00:12:55.406 "trtype": "VFIOUSER", 00:12:55.406 "adrfam": "IPv4", 00:12:55.406 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:55.406 "trsvcid": "0" 00:12:55.406 } 00:12:55.406 ], 00:12:55.406 "allow_any_host": true, 00:12:55.406 "hosts": [], 00:12:55.406 "serial_number": "SPDK2", 00:12:55.406 "model_number": "SPDK bdev Controller", 00:12:55.406 "max_namespaces": 32, 00:12:55.406 "min_cntlid": 1, 00:12:55.406 "max_cntlid": 65519, 00:12:55.406 "namespaces": [ 00:12:55.406 { 00:12:55.406 "nsid": 1, 00:12:55.406 "bdev_name": "Malloc2", 00:12:55.406 "name": "Malloc2", 00:12:55.406 "nguid": "9238B0227268404082A432C2B78EDD55", 00:12:55.406 "uuid": "9238b022-7268-4040-82a4-32c2b78edd55" 00:12:55.406 } 00:12:55.406 ] 00:12:55.406 } 00:12:55.406 ] 00:12:55.406 20:11:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:55.406 20:11:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4157173 00:12:55.406 20:11:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:55.406 20:11:20 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:55.406 20:11:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:55.406 20:11:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:55.406 20:11:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:55.406 20:11:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:55.406 20:11:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:55.406 20:11:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:55.406 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.664 [2024-07-15 20:11:20.857869] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:55.664 Malloc4 00:12:55.664 20:11:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:55.922 [2024-07-15 20:11:21.187396] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:55.922 20:11:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:55.922 Asynchronous Event Request test 00:12:55.922 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:55.922 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:55.922 Registering asynchronous event callbacks... 00:12:55.922 Starting namespace attribute notice tests for all controllers... 00:12:55.922 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:55.922 aer_cb - Changed Namespace 00:12:55.922 Cleaning up... 
00:12:56.180 [ 00:12:56.181 { 00:12:56.181 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:56.181 "subtype": "Discovery", 00:12:56.181 "listen_addresses": [], 00:12:56.181 "allow_any_host": true, 00:12:56.181 "hosts": [] 00:12:56.181 }, 00:12:56.181 { 00:12:56.181 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:56.181 "subtype": "NVMe", 00:12:56.181 "listen_addresses": [ 00:12:56.181 { 00:12:56.181 "trtype": "VFIOUSER", 00:12:56.181 "adrfam": "IPv4", 00:12:56.181 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:56.181 "trsvcid": "0" 00:12:56.181 } 00:12:56.181 ], 00:12:56.181 "allow_any_host": true, 00:12:56.181 "hosts": [], 00:12:56.181 "serial_number": "SPDK1", 00:12:56.181 "model_number": "SPDK bdev Controller", 00:12:56.181 "max_namespaces": 32, 00:12:56.181 "min_cntlid": 1, 00:12:56.181 "max_cntlid": 65519, 00:12:56.181 "namespaces": [ 00:12:56.181 { 00:12:56.181 "nsid": 1, 00:12:56.181 "bdev_name": "Malloc1", 00:12:56.181 "name": "Malloc1", 00:12:56.181 "nguid": "86AB6C074F7946D2908343C237C655FE", 00:12:56.181 "uuid": "86ab6c07-4f79-46d2-9083-43c237c655fe" 00:12:56.181 }, 00:12:56.181 { 00:12:56.181 "nsid": 2, 00:12:56.181 "bdev_name": "Malloc3", 00:12:56.181 "name": "Malloc3", 00:12:56.181 "nguid": "53D3EAEDD21C49EBA4F74128138FD300", 00:12:56.181 "uuid": "53d3eaed-d21c-49eb-a4f7-4128138fd300" 00:12:56.181 } 00:12:56.181 ] 00:12:56.181 }, 00:12:56.181 { 00:12:56.181 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:56.181 "subtype": "NVMe", 00:12:56.181 "listen_addresses": [ 00:12:56.181 { 00:12:56.181 "trtype": "VFIOUSER", 00:12:56.181 "adrfam": "IPv4", 00:12:56.181 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:56.181 "trsvcid": "0" 00:12:56.181 } 00:12:56.181 ], 00:12:56.181 "allow_any_host": true, 00:12:56.181 "hosts": [], 00:12:56.181 "serial_number": "SPDK2", 00:12:56.181 "model_number": "SPDK bdev Controller", 00:12:56.181 "max_namespaces": 32, 00:12:56.181 "min_cntlid": 1, 00:12:56.181 "max_cntlid": 65519, 00:12:56.181 "namespaces": [ 00:12:56.181 { 00:12:56.181 "nsid": 1, 00:12:56.181 "bdev_name": "Malloc2", 00:12:56.181 "name": "Malloc2", 00:12:56.181 "nguid": "9238B0227268404082A432C2B78EDD55", 00:12:56.181 "uuid": "9238b022-7268-4040-82a4-32c2b78edd55" 00:12:56.181 }, 00:12:56.181 { 00:12:56.181 "nsid": 2, 00:12:56.181 "bdev_name": "Malloc4", 00:12:56.181 "name": "Malloc4", 00:12:56.181 "nguid": "DE58A8631C9044758722690EDC2BC4F1", 00:12:56.181 "uuid": "de58a863-1c90-4475-8722-690edc2bc4f1" 00:12:56.181 } 00:12:56.181 ] 00:12:56.181 } 00:12:56.181 ] 00:12:56.181 20:11:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4157173 00:12:56.181 20:11:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:56.181 20:11:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4148383 00:12:56.181 20:11:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 4148383 ']' 00:12:56.181 20:11:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 4148383 00:12:56.181 20:11:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:56.181 20:11:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:56.181 20:11:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4148383 00:12:56.181 20:11:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:56.181 20:11:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:12:56.181 20:11:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4148383' 00:12:56.181 killing process with pid 4148383 00:12:56.181 20:11:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 4148383 00:12:56.181 20:11:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 4148383 00:12:56.752 20:11:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:56.752 20:11:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:56.752 20:11:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:56.752 20:11:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:56.752 20:11:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:56.752 20:11:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4157348 00:12:56.752 20:11:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4157348' 00:12:56.752 Process pid: 4157348 00:12:56.752 20:11:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:56.752 20:11:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:56.752 20:11:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4157348 00:12:56.752 20:11:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 4157348 ']' 00:12:56.752 20:11:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.752 20:11:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:56.752 20:11:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.752 20:11:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:56.752 20:11:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:56.752 [2024-07-15 20:11:21.872162] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:56.752 [2024-07-15 20:11:21.873443] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:12:56.752 [2024-07-15 20:11:21.873491] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.752 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.752 [2024-07-15 20:11:21.959860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:56.752 [2024-07-15 20:11:22.050894] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.752 [2024-07-15 20:11:22.050939] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:56.752 [2024-07-15 20:11:22.050949] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.752 [2024-07-15 20:11:22.050957] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.752 [2024-07-15 20:11:22.050966] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:56.752 [2024-07-15 20:11:22.051023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.752 [2024-07-15 20:11:22.051051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.752 [2024-07-15 20:11:22.051141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.752 [2024-07-15 20:11:22.051145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.014 [2024-07-15 20:11:22.135674] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:57.014 [2024-07-15 20:11:22.135779] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:57.014 [2024-07-15 20:11:22.136264] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:57.014 [2024-07-15 20:11:22.136512] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:57.014 [2024-07-15 20:11:22.136817] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:12:57.581 20:11:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:57.581 20:11:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:57.581 20:11:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:58.521 20:11:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:58.780 20:11:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:58.780 20:11:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:58.780 20:11:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:58.780 20:11:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:58.780 20:11:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:59.039 Malloc1 00:12:59.039 20:11:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:59.298 20:11:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:59.557 20:11:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:59.816 20:11:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
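At this point the target for the interrupt-mode pass is up: the messages above show nvmf_tgt starting with --interrupt-mode, its reactors coming up on cores 0-3, and each poll-group thread being switched to interrupt mode; the trace that follows creates the VFIOUSER transport and rebuilds the test subsystems. A rough standalone sketch of that bring-up, assuming paths relative to an SPDK build tree (the -M -I transport arguments are copied from the test trace as-is, without interpreting them here):

  # Start the target with shared-memory ID 0, tracepoint group mask 0xFFFF,
  # cores 0-3, and interrupt mode enabled (matching the startup trace above).
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &

  # Once the RPC socket is listening, create the vfio-user transport.
  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I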
$NUM_DEVICES) 00:12:59.816 20:11:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:59.816 20:11:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:00.075 Malloc2 00:13:00.075 20:11:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:00.336 20:11:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:00.594 20:11:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:00.853 20:11:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:00.853 20:11:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4157348 00:13:00.853 20:11:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 4157348 ']' 00:13:00.853 20:11:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 4157348 00:13:00.853 20:11:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:00.853 20:11:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:00.853 20:11:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4157348 00:13:01.112 20:11:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:01.112 20:11:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:01.112 20:11:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4157348' 00:13:01.112 killing process with pid 4157348 00:13:01.112 20:11:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 4157348 00:13:01.112 20:11:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 4157348 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:01.372 00:13:01.372 real 0m54.971s 00:13:01.372 user 3m38.029s 00:13:01.372 sys 0m4.182s 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:01.372 ************************************ 00:13:01.372 END TEST nvmf_vfio_user 00:13:01.372 ************************************ 00:13:01.372 20:11:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:01.372 20:11:26 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:01.372 20:11:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:01.372 20:11:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:01.372 20:11:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:01.372 ************************************ 00:13:01.372 START 
TEST nvmf_vfio_user_nvme_compliance 00:13:01.372 ************************************ 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:01.372 * Looking for test storage... 00:13:01.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=4158413 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 4158413' 00:13:01.372 Process pid: 4158413 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 4158413 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 4158413 ']' 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:01.372 20:11:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:01.632 [2024-07-15 20:11:26.731055] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:13:01.632 [2024-07-15 20:11:26.731116] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.632 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.632 [2024-07-15 20:11:26.812564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:01.632 [2024-07-15 20:11:26.903508] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.632 [2024-07-15 20:11:26.903550] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.632 [2024-07-15 20:11:26.903560] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.632 [2024-07-15 20:11:26.903569] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.632 [2024-07-15 20:11:26.903576] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:01.632 [2024-07-15 20:11:26.903627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.632 [2024-07-15 20:11:26.903727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.632 [2024-07-15 20:11:26.903727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.891 20:11:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:01.891 20:11:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:01.891 20:11:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:02.827 malloc0 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:02.827 20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.827 
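The rpc_cmd calls traced above assemble the minimal vfio-user target used by the compliance run: a VFIOUSER transport, a 64 MB malloc bdev, and subsystem nqn.2021-09.io.spdk:cnode0 listening under /var/run/vfio-user. A consolidated sketch of the same sequence as plain rpc.py calls (script path relative to an SPDK tree; assumes the target's RPC socket is already listening; all values are copied from the trace):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  $rpc bdev_malloc_create 64 512 -b malloc0          # 64 MB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_compliance binary run in the next step then attaches to this controller with the usual trtype:VFIOUSER transport ID.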
20:11:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:02.827 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.086 00:13:03.086 00:13:03.086 CUnit - A unit testing framework for C - Version 2.1-3 00:13:03.086 http://cunit.sourceforge.net/ 00:13:03.086 00:13:03.086 00:13:03.086 Suite: nvme_compliance 00:13:03.086 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 20:11:28.265791] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:03.086 [2024-07-15 20:11:28.267253] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:03.086 [2024-07-15 20:11:28.267277] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:03.086 [2024-07-15 20:11:28.267286] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:03.086 [2024-07-15 20:11:28.268827] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:03.086 passed 00:13:03.086 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 20:11:28.367506] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:03.086 [2024-07-15 20:11:28.370524] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:03.086 passed 00:13:03.345 Test: admin_identify_ns ...[2024-07-15 20:11:28.473065] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:03.345 [2024-07-15 20:11:28.533276] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:03.345 [2024-07-15 20:11:28.541267] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:03.345 [2024-07-15 20:11:28.562398] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:03.345 passed 00:13:03.345 Test: admin_get_features_mandatory_features ...[2024-07-15 20:11:28.658218] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:03.345 [2024-07-15 20:11:28.661245] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:03.604 passed 00:13:03.604 Test: admin_get_features_optional_features ...[2024-07-15 20:11:28.760952] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:03.604 [2024-07-15 20:11:28.763977] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:03.604 passed 00:13:03.604 Test: admin_set_features_number_of_queues ...[2024-07-15 20:11:28.864068] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:03.862 [2024-07-15 20:11:28.972364] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:03.862 passed 00:13:03.862 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 20:11:29.070391] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:03.862 [2024-07-15 20:11:29.073405] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:03.862 passed 00:13:03.862 Test: admin_get_log_page_with_lpo ...[2024-07-15 20:11:29.172239] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.120 [2024-07-15 20:11:29.242271] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:04.120 [2024-07-15 20:11:29.255332] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:04.120 passed 00:13:04.120 Test: fabric_property_get ...[2024-07-15 20:11:29.354193] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.120 [2024-07-15 20:11:29.355534] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:04.120 [2024-07-15 20:11:29.357212] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:04.120 passed 00:13:04.120 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 20:11:29.454897] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.120 [2024-07-15 20:11:29.456174] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:04.120 [2024-07-15 20:11:29.457914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:04.379 passed 00:13:04.379 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 20:11:29.558077] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.379 [2024-07-15 20:11:29.646266] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:04.379 [2024-07-15 20:11:29.662260] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:04.379 [2024-07-15 20:11:29.670382] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:04.379 passed 00:13:04.638 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 20:11:29.766145] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.638 [2024-07-15 20:11:29.767449] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:04.638 [2024-07-15 20:11:29.769172] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:04.638 passed 00:13:04.638 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 20:11:29.869083] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.638 [2024-07-15 20:11:29.944262] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:04.638 [2024-07-15 20:11:29.967264] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:04.638 [2024-07-15 20:11:29.972383] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:04.896 passed 00:13:04.896 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 20:11:30.071176] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.896 [2024-07-15 20:11:30.072486] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:04.896 [2024-07-15 20:11:30.072512] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:04.896 [2024-07-15 20:11:30.074198] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:04.896 passed 00:13:04.896 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 20:11:30.171153] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.155 [2024-07-15 20:11:30.262266] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:05.155 [2024-07-15 20:11:30.270264] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:05.155 [2024-07-15 20:11:30.278273] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:05.155 [2024-07-15 20:11:30.286267] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:05.155 [2024-07-15 20:11:30.315434] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.155 passed 00:13:05.155 Test: admin_create_io_sq_verify_pc ...[2024-07-15 20:11:30.416383] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.155 [2024-07-15 20:11:30.430271] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:05.155 [2024-07-15 20:11:30.447563] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.155 passed 00:13:05.414 Test: admin_create_io_qp_max_qps ...[2024-07-15 20:11:30.549268] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.349 [2024-07-15 20:11:31.667266] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:06.916 [2024-07-15 20:11:32.050383] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.916 passed 00:13:06.916 Test: admin_create_io_sq_shared_cq ...[2024-07-15 20:11:32.152498] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:07.175 [2024-07-15 20:11:32.293266] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:07.175 [2024-07-15 20:11:32.330362] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:07.175 passed 00:13:07.175 00:13:07.175 Run Summary: Type Total Ran Passed Failed Inactive 00:13:07.175 suites 1 1 n/a 0 0 00:13:07.175 tests 18 18 18 0 0 00:13:07.175 asserts 360 360 360 0 n/a 00:13:07.175 00:13:07.175 Elapsed time = 1.718 seconds 00:13:07.175 20:11:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 4158413 00:13:07.175 20:11:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 4158413 ']' 00:13:07.175 20:11:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 4158413 00:13:07.175 20:11:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:07.175 20:11:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:07.175 20:11:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4158413 00:13:07.175 20:11:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:07.175 20:11:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:07.175 20:11:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4158413' 00:13:07.175 killing process with pid 4158413 00:13:07.175 20:11:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 4158413 00:13:07.175 20:11:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 4158413 00:13:07.434 20:11:32 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:07.434 20:11:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:07.434 00:13:07.434 real 0m6.116s 00:13:07.434 user 0m17.201s 00:13:07.434 sys 0m0.504s 00:13:07.434 20:11:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:07.434 20:11:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:07.434 ************************************ 00:13:07.434 END TEST nvmf_vfio_user_nvme_compliance 00:13:07.434 ************************************ 00:13:07.434 20:11:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:07.434 20:11:32 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:07.434 20:11:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:07.434 20:11:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:07.434 20:11:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:07.434 ************************************ 00:13:07.434 START TEST nvmf_vfio_user_fuzz 00:13:07.434 ************************************ 00:13:07.434 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:07.694 * Looking for test storage... 00:13:07.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.694 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.695 20:11:32 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=4159560 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 4159560' 00:13:07.695 Process pid: 4159560 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 4159560 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 4159560 ']' 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
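The trace that follows stands up the vfio-user fuzz target and then launches the fuzzer. Condensed into a sketch for readability, using only the commands and paths visible in this log (rpc_cmd is the autotest helper that forwards to the target's RPC socket; $rootdir standing in for the SPDK checkout is an assumption added here):

  # Sketch of the vfio_user_fuzz target setup, reconstructed from the trace below.
  # Assumes $rootdir points at the SPDK checkout used by this job.
  $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  rpc_cmd nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  rpc_cmd bdev_malloc_create 64 512 -b malloc0
  rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  # 30-second fuzz pass against the vfio-user endpoint (admin and I/O queues):
  $rootdir/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a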
00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:07.695 20:11:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:07.954 20:11:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:07.954 20:11:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:07.954 20:11:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:08.888 malloc0 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:08.888 20:11:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:40.979 Fuzzing completed. 
Shutting down the fuzz application 00:13:40.979 00:13:40.979 Dumping successful admin opcodes: 00:13:40.979 8, 9, 10, 24, 00:13:40.979 Dumping successful io opcodes: 00:13:40.979 0, 00:13:40.979 NS: 0x200003a1ef00 I/O qp, Total commands completed: 683994, total successful commands: 2664, random_seed: 4233409728 00:13:40.979 NS: 0x200003a1ef00 admin qp, Total commands completed: 138878, total successful commands: 1126, random_seed: 1477901376 00:13:40.979 20:12:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:40.979 20:12:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.979 20:12:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:40.979 20:12:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.979 20:12:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 4159560 00:13:40.979 20:12:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 4159560 ']' 00:13:40.979 20:12:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 4159560 00:13:40.979 20:12:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:13:40.979 20:12:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:40.979 20:12:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4159560 00:13:40.979 20:12:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:40.979 20:12:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:40.979 20:12:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4159560' 00:13:40.979 killing process with pid 4159560 00:13:40.979 20:12:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 4159560 00:13:40.979 20:12:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 4159560 00:13:40.980 20:12:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:40.980 20:12:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:40.980 00:13:40.980 real 0m32.262s 00:13:40.980 user 0m35.278s 00:13:40.980 sys 0m21.337s 00:13:40.980 20:12:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:40.980 20:12:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:40.980 ************************************ 00:13:40.980 END TEST nvmf_vfio_user_fuzz 00:13:40.980 ************************************ 00:13:40.980 20:12:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:40.980 20:12:05 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:40.980 20:12:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:40.980 20:12:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:40.980 20:12:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:40.980 ************************************ 
00:13:40.980 START TEST nvmf_host_management 00:13:40.980 ************************************ 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:40.980 * Looking for test storage... 00:13:40.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.980 
20:12:05 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:40.980 20:12:05 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:40.980 20:12:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:45.172 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:45.172 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:45.172 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:45.172 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:45.172 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:45.172 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:45.172 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:45.172 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:45.172 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:45.172 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:45.172 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:45.172 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:45.172 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:45.172 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:45.172 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:45.172 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:45.172 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:45.172 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:45.173 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:45.173 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:45.173 Found net devices under 0000:af:00.0: cvl_0_0 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:45.173 Found net devices under 0000:af:00.1: cvl_0_1 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:45.173 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:45.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:45.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:13:45.433 00:13:45.433 --- 10.0.0.2 ping statistics --- 00:13:45.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.433 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:45.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:45.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:13:45.433 00:13:45.433 --- 10.0.0.1 ping statistics --- 00:13:45.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.433 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=4168976 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 4168976 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 4168976 ']' 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:45.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:45.433 20:12:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:45.692 [2024-07-15 20:12:10.811966] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:13:45.692 [2024-07-15 20:12:10.812022] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.692 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.692 [2024-07-15 20:12:10.889316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:45.692 [2024-07-15 20:12:10.982788] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.692 [2024-07-15 20:12:10.982831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.692 [2024-07-15 20:12:10.982841] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.692 [2024-07-15 20:12:10.982850] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.692 [2024-07-15 20:12:10.982857] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:45.692 [2024-07-15 20:12:10.982898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.692 [2024-07-15 20:12:10.982987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.692 [2024-07-15 20:12:10.983080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:45.692 [2024-07-15 20:12:10.983081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:45.951 [2024-07-15 20:12:11.136541] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:45.951 20:12:11 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:45.951 Malloc0 00:13:45.951 [2024-07-15 20:12:11.200461] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4169027 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4169027 /var/tmp/bdevperf.sock 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 4169027 ']' 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:45.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:45.951 { 00:13:45.951 "params": { 00:13:45.951 "name": "Nvme$subsystem", 00:13:45.951 "trtype": "$TEST_TRANSPORT", 00:13:45.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:45.951 "adrfam": "ipv4", 00:13:45.951 "trsvcid": "$NVMF_PORT", 00:13:45.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:45.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:45.951 "hdgst": ${hdgst:-false}, 00:13:45.951 "ddgst": ${ddgst:-false} 00:13:45.951 }, 00:13:45.951 "method": "bdev_nvme_attach_controller" 00:13:45.951 } 00:13:45.951 EOF 00:13:45.951 )") 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:45.951 20:12:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:45.951 "params": { 00:13:45.951 "name": "Nvme0", 00:13:45.951 "trtype": "tcp", 00:13:45.951 "traddr": "10.0.0.2", 00:13:45.951 "adrfam": "ipv4", 00:13:45.951 "trsvcid": "4420", 00:13:45.951 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:45.951 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:45.951 "hdgst": false, 00:13:45.951 "ddgst": false 00:13:45.951 }, 00:13:45.951 "method": "bdev_nvme_attach_controller" 00:13:45.951 }' 00:13:45.951 [2024-07-15 20:12:11.294855] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:13:45.951 [2024-07-15 20:12:11.294912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4169027 ] 00:13:46.210 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.210 [2024-07-15 20:12:11.376028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.210 [2024-07-15 20:12:11.461610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.468 Running I/O for 10 seconds... 
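With bdevperf attached to Nvme0 over TCP and running I/O, the test polls the bdevperf RPC socket until enough reads have completed before moving on. A condensed sketch of the waitforio loop traced below, reconstructed from this log (the function wrapper and argument handling are assumptions added here):

  # Sketch of the waitforio polling loop, reconstructed from the trace below.
  waitforio() {
      local sock=$1 bdev=$2
      local ret=1 i
      for (( i = 10; i != 0; i-- )); do
          # bdevperf exposes its own RPC socket; count completed reads on the attached bdev.
          read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
          if [ "$read_io_count" -ge 100 ]; then
              ret=0
              break
          fi
          sleep 0.25
      done
      return $ret
  }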
00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=53 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 53 -ge 100 ']' 00:13:46.727 20:12:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:13:46.987 20:12:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:13:46.987 20:12:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:46.987 20:12:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:46.987 20:12:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.987 20:12:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:46.987 20:12:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:46.987 20:12:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.987 20:12:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:13:46.987 20:12:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:13:46.987 20:12:12 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:13:46.987 20:12:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:46.987 20:12:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:46.987 20:12:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:46.987 20:12:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.987 20:12:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:13:46.987 [2024-07-15 20:12:12.216419] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x702690 is same with the state(5) to be set
00:13:46.987 [2024-07-15 20:12:12.216549] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x702690 is same with the state(5) to be set
[... the same tcp.c:1621 recv-state error for tqpair=0x702690 repeats through 2024-07-15 20:12:12.217462 ...]
00:13:46.988 [2024-07-15 20:12:12.218690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:46.988 [2024-07-15 20:12:12.218736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pattern repeats for the remaining queued writes, cid:1 through cid:63 with lba stepping by 128 from 57472 up to 65408, each followed by an ABORTED - SQ DELETION completion ...]
00:13:46.989 [2024-07-15 20:12:12.220221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:13:46.989 [2024-07-15 20:12:12.220283] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa00ea0 was disconnected and freed. reset controller.
00:13:46.989 20:12:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.989 20:12:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:46.989 [2024-07-15 20:12:12.221618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:46.989 20:12:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.989 20:12:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:46.989 task offset: 57344 on job bdev=Nvme0n1 fails 00:13:46.989 00:13:46.989 Latency(us) 00:13:46.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.989 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:46.989 Job: Nvme0n1 ended in about 0.43 seconds with error 00:13:46.989 Verification LBA range: start 0x0 length 0x400 00:13:46.989 Nvme0n1 : 0.43 1033.47 64.59 147.64 0.00 52371.99 2323.55 53858.68 00:13:46.989 =================================================================================================================== 00:13:46.989 Total : 1033.47 64.59 147.64 0.00 52371.99 2323.55 53858.68 00:13:46.989 [2024-07-15 20:12:12.223975] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:46.989 [2024-07-15 20:12:12.223994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5cee90 (9): Bad file descriptor 00:13:46.989 20:12:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.989 20:12:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:46.989 [2024-07-15 20:12:12.277919] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
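At this point the test has pulled host0 out of cnode0's allow list while bdevperf still had its full queue depth of 64 writes outstanding, which is why exactly 64 WRITE commands came back above as ABORTED - SQ DELETION, and it then adds the host back so the initiator can reset and reconnect. Outside the harness, the same toggle is just two RPCs against the live target; a minimal sketch using the NQNs from this run (the rpc.py path is the in-tree script this log uses throughout):

```bash
#!/usr/bin/env bash
# Toggle host access on a subsystem while I/O is running, then let the
# initiator's reset/reconnect path recover -- the behaviour exercised
# around host_management.sh's @84/@85 steps above.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode0
host=nqn.2016-06.io.spdk:host0

# Revoke access: the target tears down the host's queue pairs, so any
# in-flight commands complete as "ABORTED - SQ DELETION".
$rpc nvmf_subsystem_remove_host "$subsys" "$host"

# Restore access: the host-side bdev_nvme layer notices the dropped
# connection, resets the controller, and I/O resumes.
$rpc nvmf_subsystem_add_host "$subsys" "$host"
```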
00:13:47.925 20:12:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4169027 00:13:47.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4169027) - No such process 00:13:47.925 20:12:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:47.925 20:12:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:47.925 20:12:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:47.925 20:12:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:47.925 20:12:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:47.925 20:12:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:47.925 20:12:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:47.925 20:12:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:47.925 { 00:13:47.925 "params": { 00:13:47.925 "name": "Nvme$subsystem", 00:13:47.925 "trtype": "$TEST_TRANSPORT", 00:13:47.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:47.925 "adrfam": "ipv4", 00:13:47.925 "trsvcid": "$NVMF_PORT", 00:13:47.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:47.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:47.926 "hdgst": ${hdgst:-false}, 00:13:47.926 "ddgst": ${ddgst:-false} 00:13:47.926 }, 00:13:47.926 "method": "bdev_nvme_attach_controller" 00:13:47.926 } 00:13:47.926 EOF 00:13:47.926 )") 00:13:47.926 20:12:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:47.926 20:12:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:47.926 20:12:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:47.926 20:12:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:47.926 "params": { 00:13:47.926 "name": "Nvme0", 00:13:47.926 "trtype": "tcp", 00:13:47.926 "traddr": "10.0.0.2", 00:13:47.926 "adrfam": "ipv4", 00:13:47.926 "trsvcid": "4420", 00:13:47.926 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:47.926 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:47.926 "hdgst": false, 00:13:47.926 "ddgst": false 00:13:47.926 }, 00:13:47.926 "method": "bdev_nvme_attach_controller" 00:13:47.926 }' 00:13:48.184 [2024-07-15 20:12:13.289810] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:13:48.184 [2024-07-15 20:12:13.289871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4169530 ] 00:13:48.184 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.184 [2024-07-15 20:12:13.370394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.184 [2024-07-15 20:12:13.456449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.442 Running I/O for 1 seconds... 
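The retry run above hands bdevperf its whole target description as a JSON config on an anonymous pipe (/dev/fd/62) rather than CLI options. Below is a standalone sketch of an equivalent invocation: the attach parameters are copied from the printf output above, but the outer subsystems/bdev wrapper is an assumption about what gen_nvmf_target_json ultimately emits, so treat that part as illustrative rather than authoritative.

```bash
#!/usr/bin/env bash
# Drive bdevperf against the NVMe/TCP target from this run, describing the
# remote namespace in a JSON config fed through a pipe.
set -euo pipefail

bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

# "params" block copied from the log; the wrapper structure is assumed.
config=$(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)

# Same knobs as the log: 64 in-flight I/Os, 64 KiB each, verify workload, 1 s.
"$bdevperf" --json <(echo "$config") -q 64 -o 65536 -w verify -t 1
```

Since the workload uses 64 KiB I/O, the MiB/s column in the result that follows is simply IOPS/16: 1115.74 IOPS works out to the reported 69.73 MiB/s.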
00:13:49.377 00:13:49.377 Latency(us) 00:13:49.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.377 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:49.377 Verification LBA range: start 0x0 length 0x400 00:13:49.377 Nvme0n1 : 1.03 1115.74 69.73 0.00 0.00 56311.44 8757.99 52905.43 00:13:49.377 =================================================================================================================== 00:13:49.377 Total : 1115.74 69.73 0.00 0.00 56311.44 8757.99 52905.43 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:49.635 rmmod nvme_tcp 00:13:49.635 rmmod nvme_fabrics 00:13:49.635 rmmod nvme_keyring 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 4168976 ']' 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 4168976 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 4168976 ']' 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 4168976 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4168976 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4168976' 00:13:49.635 killing process with pid 4168976 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 4168976 00:13:49.635 20:12:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 4168976 00:13:49.940 [2024-07-15 20:12:15.161285] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:49.940 20:12:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:49.940 20:12:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:49.940 20:12:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:49.940 20:12:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:49.940 20:12:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:49.940 20:12:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.940 20:12:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.940 20:12:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.904 20:12:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:52.163 20:12:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:52.163 00:13:52.163 real 0m12.202s 00:13:52.163 user 0m20.892s 00:13:52.163 sys 0m5.211s 00:13:52.163 20:12:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:52.163 20:12:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:52.163 ************************************ 00:13:52.163 END TEST nvmf_host_management 00:13:52.163 ************************************ 00:13:52.163 20:12:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:52.163 20:12:17 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:52.163 20:12:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:52.163 20:12:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:52.163 20:12:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:52.163 ************************************ 00:13:52.163 START TEST nvmf_lvol 00:13:52.163 ************************************ 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:52.163 * Looking for test storage... 
00:13:52.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.163 20:12:17 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.164 20:12:17 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:52.164 20:12:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:58.729 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:58.729 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:58.729 Found net devices under 0000:af:00.0: cvl_0_0 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:58.729 Found net devices under 0000:af:00.1: cvl_0_1 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:58.729 
20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:58.729 20:12:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:58.729 20:12:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:58.729 20:12:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:58.729 20:12:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:58.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:58.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:13:58.729 00:13:58.729 --- 10.0.0.2 ping statistics --- 00:13:58.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.730 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:58.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:58.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:13:58.730 00:13:58.730 --- 10.0.0.1 ping statistics --- 00:13:58.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.730 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=4173304 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 4173304 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 4173304 ']' 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:58.730 20:12:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:58.730 [2024-07-15 20:12:23.165190] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:13:58.730 [2024-07-15 20:12:23.165251] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.730 EAL: No free 2048 kB hugepages reported on node 1 00:13:58.730 [2024-07-15 20:12:23.252367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:58.730 [2024-07-15 20:12:23.342912] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.730 [2024-07-15 20:12:23.342956] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
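For context on the addressing used here: the nvmf_tcp_init steps a few lines up moved one port of the E810 NIC, cvl_0_0, into a private network namespace to act as the target at 10.0.0.2, and left its sibling port cvl_0_1 in the default namespace as the initiator at 10.0.0.1. Condensed from those commands, the plumbing is roughly the following sketch (device names and addresses are the ones this run used; run as root):

```bash
#!/usr/bin/env bash
# Target/initiator split used by the TCP phy tests: one NIC port per side,
# with the target port isolated in its own network namespace.
set -e

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Target side: cvl_0_0 lives in cvl_0_0_ns_spdk and answers on 10.0.0.2.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Initiator side: cvl_0_1 stays in the default namespace as 10.0.0.1,
# with the NVMe/TCP port opened in the firewall.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions, as the harness does.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```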
00:13:58.730 [2024-07-15 20:12:23.342966] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.730 [2024-07-15 20:12:23.342975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.730 [2024-07-15 20:12:23.342982] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:58.730 [2024-07-15 20:12:23.343031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.730 [2024-07-15 20:12:23.343153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.730 [2024-07-15 20:12:23.343155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.988 20:12:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:58.988 20:12:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:58.988 20:12:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:58.988 20:12:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:58.988 20:12:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:58.988 20:12:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.988 20:12:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:59.246 [2024-07-15 20:12:24.379962] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.246 20:12:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:59.505 20:12:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:59.505 20:12:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:59.764 20:12:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:59.764 20:12:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:00.022 20:12:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:00.280 20:12:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=48af22d8-dce8-4a74-8496-f21dd4309ada 00:14:00.280 20:12:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 48af22d8-dce8-4a74-8496-f21dd4309ada lvol 20 00:14:00.539 20:12:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1bc9349e-d7d3-4722-b097-b968b14c806d 00:14:00.539 20:12:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:00.797 20:12:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1bc9349e-d7d3-4722-b097-b968b14c806d 00:14:01.055 20:12:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:14:01.313 [2024-07-15 20:12:26.452062] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.313 20:12:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:01.571 20:12:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4174109 00:14:01.571 20:12:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:01.571 20:12:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:01.571 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.502 20:12:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1bc9349e-d7d3-4722-b097-b968b14c806d MY_SNAPSHOT 00:14:02.760 20:12:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4d07897e-3d7f-40e1-a3ca-3206ebc6dba6 00:14:02.760 20:12:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1bc9349e-d7d3-4722-b097-b968b14c806d 30 00:14:03.017 20:12:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4d07897e-3d7f-40e1-a3ca-3206ebc6dba6 MY_CLONE 00:14:03.275 20:12:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=00527d5e-bf45-4983-b6f8-711b95b4e359 00:14:03.275 20:12:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 00527d5e-bf45-4983-b6f8-711b95b4e359 00:14:04.211 20:12:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4174109 00:14:12.327 Initializing NVMe Controllers 00:14:12.327 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:12.327 Controller IO queue size 128, less than required. 00:14:12.327 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:12.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:12.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:12.327 Initialization complete. Launching workers. 
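While those perf workers push 4 KiB random writes at the exported lvol for 10 seconds, the script exercises the lvol management path against the live volume. Roughly, using the same rpc.py shorthand and placeholder UUIDs as above:

  # fabric-side load: QD 128, 4 KiB random writes, 10 s, core mask 0x18 (cores 3-4)
  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  perf_pid=$!
  sleep 1
  # snapshot the lvol, grow it, clone the snapshot, then inflate the clone
  rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
  rpc.py bdev_lvol_resize   <lvol-uuid> 30
  rpc.py bdev_lvol_clone    <snapshot-uuid> MY_CLONE
  rpc.py bdev_lvol_inflate  <clone-uuid>
  wait $perf_pid

The per-core latency table below is the output of that perf run.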
00:14:12.327 ======================================================== 00:14:12.327 Latency(us) 00:14:12.327 Device Information : IOPS MiB/s Average min max 00:14:12.327 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9293.50 36.30 13775.65 1764.51 73670.07 00:14:12.327 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9160.60 35.78 13974.83 3543.25 89563.18 00:14:12.327 ======================================================== 00:14:12.327 Total : 18454.10 72.09 13874.52 1764.51 89563.18 00:14:12.327 00:14:12.327 20:12:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:12.327 20:12:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1bc9349e-d7d3-4722-b097-b968b14c806d 00:14:12.327 20:12:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 48af22d8-dce8-4a74-8496-f21dd4309ada 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:12.586 rmmod nvme_tcp 00:14:12.586 rmmod nvme_fabrics 00:14:12.586 rmmod nvme_keyring 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 4173304 ']' 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 4173304 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 4173304 ']' 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 4173304 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4173304 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4173304' 00:14:12.586 killing process with pid 4173304 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 4173304 00:14:12.586 20:12:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 4173304 00:14:12.845 20:12:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:12.845 
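Cleanup, which the nvmftestfini trace around this point is performing, is the mirror image of the setup (same shorthand and placeholders):

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_lvol_delete <lvol-uuid>
  rpc.py bdev_lvol_delete_lvstore -u <lvs-uuid>
  # nvmftestfini then unloads the initiator-side modules and kills the target
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"; wait "$nvmfpid"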
20:12:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:12.845 20:12:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:12.845 20:12:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:12.845 20:12:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:12.845 20:12:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.845 20:12:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.845 20:12:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:15.377 00:14:15.377 real 0m22.878s 00:14:15.377 user 1m7.951s 00:14:15.377 sys 0m7.141s 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:15.377 ************************************ 00:14:15.377 END TEST nvmf_lvol 00:14:15.377 ************************************ 00:14:15.377 20:12:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:15.377 20:12:40 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:15.377 20:12:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:15.377 20:12:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:15.377 20:12:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:15.377 ************************************ 00:14:15.377 START TEST nvmf_lvs_grow 00:14:15.377 ************************************ 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:15.377 * Looking for test storage... 
00:14:15.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.377 20:12:40 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:15.378 20:12:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:20.649 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:20.649 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:20.649 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:20.649 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:20.649 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:20.649 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:20.649 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:20.649 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:20.649 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:20.649 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:20.649 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:20.649 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:20.649 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:20.649 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:20.649 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:20.649 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:20.649 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:20.649 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:20.649 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:20.649 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:20.650 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:20.650 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:20.650 Found net devices under 0000:af:00.0: cvl_0_0 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:20.650 Found net devices under 0000:af:00.1: cvl_0_1 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:20.650 20:12:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:20.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:20.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:14:20.908 00:14:20.908 --- 10.0.0.2 ping statistics --- 00:14:20.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.908 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:20.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:20.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:14:20.908 00:14:20.908 --- 10.0.0.1 ping statistics --- 00:14:20.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.908 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=4179669 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 4179669 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 4179669 ']' 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.908 20:12:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:20.908 [2024-07-15 20:12:46.218650] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:14:20.908 [2024-07-15 20:12:46.218706] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.908 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.165 [2024-07-15 20:12:46.303662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.165 [2024-07-15 20:12:46.390133] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.165 [2024-07-15 20:12:46.390179] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
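The two pings above are the tail end of nvmf_tcp_init, which rebuilds the same two-port topology for every run on this rig: one port of the e810 NIC (cvl_0_0) is moved into a private network namespace to host the target, the other (cvl_0_1) stays in the root namespace as the initiator. Condensed from the trace:

  # initiator side: cvl_0_1 stays in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  # target side: cvl_0_0 gets its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and sanity-check reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1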
00:14:21.165 [2024-07-15 20:12:46.390189] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.165 [2024-07-15 20:12:46.390198] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.165 [2024-07-15 20:12:46.390205] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.165 [2024-07-15 20:12:46.390233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.165 20:12:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:21.165 20:12:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:21.165 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:21.165 20:12:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:21.165 20:12:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:21.424 20:12:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.424 20:12:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:21.424 [2024-07-15 20:12:46.750678] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.424 20:12:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:21.424 20:12:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:21.424 20:12:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:21.424 20:12:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:21.683 ************************************ 00:14:21.683 START TEST lvs_grow_clean 00:14:21.683 ************************************ 00:14:21.683 20:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:21.683 20:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:21.683 20:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:21.683 20:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:21.683 20:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:21.683 20:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:21.683 20:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:21.683 20:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:21.683 20:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:21.683 20:12:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:21.941 20:12:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:21.941 20:12:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:22.199 20:12:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3c143e15-03af-4e7c-8aac-abda1008fb2f 00:14:22.199 20:12:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c143e15-03af-4e7c-8aac-abda1008fb2f 00:14:22.199 20:12:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:22.469 20:12:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:22.469 20:12:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:22.469 20:12:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3c143e15-03af-4e7c-8aac-abda1008fb2f lvol 150 00:14:23.034 20:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=460527ff-82aa-4455-b4cd-8c069ed80fe9 00:14:23.034 20:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:23.034 20:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:23.034 [2024-07-15 20:12:48.306484] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:23.034 [2024-07-15 20:12:48.306555] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:23.035 true 00:14:23.035 20:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c143e15-03af-4e7c-8aac-abda1008fb2f 00:14:23.035 20:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:23.600 20:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:23.600 20:12:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:24.167 20:12:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 460527ff-82aa-4455-b4cd-8c069ed80fe9 00:14:24.424 20:12:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:24.683 [2024-07-15 20:12:50.015533] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.942 20:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:24.942 20:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4180491 00:14:24.942 20:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:24.942 20:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4180491 /var/tmp/bdevperf.sock 00:14:24.942 20:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 4180491 ']' 00:14:24.942 20:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:24.942 20:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:24.942 20:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:24.942 20:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:24.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:24.942 20:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:24.942 20:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:25.202 [2024-07-15 20:12:50.328921] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
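The bdevperf instance being started here (note -z, so it idles until driven over /var/tmp/bdevperf.sock) is the I/O consumer for lvs_grow_clean; it attaches the exported namespace as Nvme0n1 and then runs perform_tests. The storage it writes to was provisioned just above from a plain file, and the point of the test is that growing that file only helps once the lvstore is told about it. Condensed, with a placeholder path and UUID:

  # lvstore backed by a file-based aio bdev, 4 MiB clusters
  truncate -s 200M /path/to/aio_bdev_file
  rpc.py bdev_aio_create /path/to/aio_bdev_file aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150      # 150 MiB request, rounded up to 38 clusters
  # grow the backing file and let the aio bdev re-read its size...
  truncate -s 400M /path/to/aio_bdev_file
  rpc.py bdev_aio_rescan aio_bdev
  # ...but the lvstore only gains clusters once it is explicitly grown (49 -> 99 here)
  rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>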
00:14:25.202 [2024-07-15 20:12:50.328979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4180491 ] 00:14:25.202 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.202 [2024-07-15 20:12:50.399614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.202 [2024-07-15 20:12:50.491872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.461 20:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:25.461 20:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:25.461 20:12:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:26.028 Nvme0n1 00:14:26.028 20:12:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:26.287 [ 00:14:26.287 { 00:14:26.287 "name": "Nvme0n1", 00:14:26.287 "aliases": [ 00:14:26.287 "460527ff-82aa-4455-b4cd-8c069ed80fe9" 00:14:26.287 ], 00:14:26.287 "product_name": "NVMe disk", 00:14:26.287 "block_size": 4096, 00:14:26.287 "num_blocks": 38912, 00:14:26.287 "uuid": "460527ff-82aa-4455-b4cd-8c069ed80fe9", 00:14:26.287 "assigned_rate_limits": { 00:14:26.287 "rw_ios_per_sec": 0, 00:14:26.287 "rw_mbytes_per_sec": 0, 00:14:26.287 "r_mbytes_per_sec": 0, 00:14:26.287 "w_mbytes_per_sec": 0 00:14:26.287 }, 00:14:26.287 "claimed": false, 00:14:26.287 "zoned": false, 00:14:26.287 "supported_io_types": { 00:14:26.287 "read": true, 00:14:26.287 "write": true, 00:14:26.287 "unmap": true, 00:14:26.287 "flush": true, 00:14:26.287 "reset": true, 00:14:26.287 "nvme_admin": true, 00:14:26.287 "nvme_io": true, 00:14:26.287 "nvme_io_md": false, 00:14:26.287 "write_zeroes": true, 00:14:26.287 "zcopy": false, 00:14:26.287 "get_zone_info": false, 00:14:26.287 "zone_management": false, 00:14:26.287 "zone_append": false, 00:14:26.287 "compare": true, 00:14:26.287 "compare_and_write": true, 00:14:26.287 "abort": true, 00:14:26.287 "seek_hole": false, 00:14:26.287 "seek_data": false, 00:14:26.287 "copy": true, 00:14:26.287 "nvme_iov_md": false 00:14:26.287 }, 00:14:26.287 "memory_domains": [ 00:14:26.287 { 00:14:26.287 "dma_device_id": "system", 00:14:26.287 "dma_device_type": 1 00:14:26.287 } 00:14:26.287 ], 00:14:26.287 "driver_specific": { 00:14:26.287 "nvme": [ 00:14:26.287 { 00:14:26.287 "trid": { 00:14:26.287 "trtype": "TCP", 00:14:26.287 "adrfam": "IPv4", 00:14:26.287 "traddr": "10.0.0.2", 00:14:26.287 "trsvcid": "4420", 00:14:26.287 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:26.287 }, 00:14:26.287 "ctrlr_data": { 00:14:26.287 "cntlid": 1, 00:14:26.287 "vendor_id": "0x8086", 00:14:26.288 "model_number": "SPDK bdev Controller", 00:14:26.288 "serial_number": "SPDK0", 00:14:26.288 "firmware_revision": "24.09", 00:14:26.288 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:26.288 "oacs": { 00:14:26.288 "security": 0, 00:14:26.288 "format": 0, 00:14:26.288 "firmware": 0, 00:14:26.288 "ns_manage": 0 00:14:26.288 }, 00:14:26.288 "multi_ctrlr": true, 00:14:26.288 "ana_reporting": false 00:14:26.288 }, 
00:14:26.288 "vs": { 00:14:26.288 "nvme_version": "1.3" 00:14:26.288 }, 00:14:26.288 "ns_data": { 00:14:26.288 "id": 1, 00:14:26.288 "can_share": true 00:14:26.288 } 00:14:26.288 } 00:14:26.288 ], 00:14:26.288 "mp_policy": "active_passive" 00:14:26.288 } 00:14:26.288 } 00:14:26.288 ] 00:14:26.288 20:12:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4180754 00:14:26.288 20:12:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:26.288 20:12:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:26.546 Running I/O for 10 seconds... 00:14:27.482 Latency(us) 00:14:27.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.482 Nvme0n1 : 1.00 15179.00 59.29 0.00 0.00 0.00 0.00 0.00 00:14:27.482 =================================================================================================================== 00:14:27.482 Total : 15179.00 59.29 0.00 0.00 0.00 0.00 0.00 00:14:27.482 00:14:28.441 20:12:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3c143e15-03af-4e7c-8aac-abda1008fb2f 00:14:28.441 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:28.441 Nvme0n1 : 2.00 15305.50 59.79 0.00 0.00 0.00 0.00 0.00 00:14:28.441 =================================================================================================================== 00:14:28.441 Total : 15305.50 59.79 0.00 0.00 0.00 0.00 0.00 00:14:28.441 00:14:28.441 true 00:14:28.441 20:12:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c143e15-03af-4e7c-8aac-abda1008fb2f 00:14:28.441 20:12:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:28.699 20:12:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:28.699 20:12:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:28.699 20:12:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4180754 00:14:29.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.645 Nvme0n1 : 3.00 15368.33 60.03 0.00 0.00 0.00 0.00 0.00 00:14:29.645 =================================================================================================================== 00:14:29.645 Total : 15368.33 60.03 0.00 0.00 0.00 0.00 0.00 00:14:29.645 00:14:30.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.610 Nvme0n1 : 4.00 15416.75 60.22 0.00 0.00 0.00 0.00 0.00 00:14:30.610 =================================================================================================================== 00:14:30.610 Total : 15416.75 60.22 0.00 0.00 0.00 0.00 0.00 00:14:30.610 00:14:31.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:31.542 Nvme0n1 : 5.00 15445.80 60.34 0.00 0.00 0.00 0.00 0.00 00:14:31.542 =================================================================================================================== 00:14:31.542 
Total : 15445.80 60.34 0.00 0.00 0.00 0.00 0.00 00:14:31.542 00:14:32.476 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.476 Nvme0n1 : 6.00 15475.00 60.45 0.00 0.00 0.00 0.00 0.00 00:14:32.476 =================================================================================================================== 00:14:32.476 Total : 15475.00 60.45 0.00 0.00 0.00 0.00 0.00 00:14:32.476 00:14:33.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.412 Nvme0n1 : 7.00 15487.00 60.50 0.00 0.00 0.00 0.00 0.00 00:14:33.412 =================================================================================================================== 00:14:33.412 Total : 15487.00 60.50 0.00 0.00 0.00 0.00 0.00 00:14:33.412 00:14:34.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.347 Nvme0n1 : 8.00 15503.75 60.56 0.00 0.00 0.00 0.00 0.00 00:14:34.347 =================================================================================================================== 00:14:34.347 Total : 15503.75 60.56 0.00 0.00 0.00 0.00 0.00 00:14:34.347 00:14:35.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.724 Nvme0n1 : 9.00 15517.44 60.62 0.00 0.00 0.00 0.00 0.00 00:14:35.724 =================================================================================================================== 00:14:35.724 Total : 15517.44 60.62 0.00 0.00 0.00 0.00 0.00 00:14:35.724 00:14:36.659 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.659 Nvme0n1 : 10.00 15527.80 60.66 0.00 0.00 0.00 0.00 0.00 00:14:36.659 =================================================================================================================== 00:14:36.659 Total : 15527.80 60.66 0.00 0.00 0.00 0.00 0.00 00:14:36.659 00:14:36.659 00:14:36.659 Latency(us) 00:14:36.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.659 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.659 Nvme0n1 : 10.00 15534.05 60.68 0.00 0.00 8234.77 5123.72 18350.08 00:14:36.659 =================================================================================================================== 00:14:36.659 Total : 15534.05 60.68 0.00 0.00 8234.77 5123.72 18350.08 00:14:36.659 0 00:14:36.659 20:13:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4180491 00:14:36.659 20:13:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 4180491 ']' 00:14:36.659 20:13:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 4180491 00:14:36.659 20:13:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:14:36.659 20:13:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:36.659 20:13:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4180491 00:14:36.659 20:13:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:36.659 20:13:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:36.659 20:13:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4180491' 00:14:36.659 killing process with pid 4180491 00:14:36.659 20:13:01 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 4180491 00:14:36.659 Received shutdown signal, test time was about 10.000000 seconds 00:14:36.659 00:14:36.659 Latency(us) 00:14:36.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.659 =================================================================================================================== 00:14:36.659 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:36.659 20:13:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 4180491 00:14:36.659 20:13:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:37.226 20:13:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:37.790 20:13:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c143e15-03af-4e7c-8aac-abda1008fb2f 00:14:37.790 20:13:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:38.047 20:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:38.047 20:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:38.047 20:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:38.611 [2024-07-15 20:13:03.679991] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:38.612 20:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c143e15-03af-4e7c-8aac-abda1008fb2f 00:14:38.612 20:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:38.612 20:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c143e15-03af-4e7c-8aac-abda1008fb2f 00:14:38.612 20:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:38.612 20:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:38.612 20:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:38.612 20:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:38.612 20:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:38.612 20:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:38.612 20:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:38.612 20:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:38.612 20:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c143e15-03af-4e7c-8aac-abda1008fb2f 00:14:38.869 request: 00:14:38.869 { 00:14:38.869 "uuid": "3c143e15-03af-4e7c-8aac-abda1008fb2f", 00:14:38.869 "method": "bdev_lvol_get_lvstores", 00:14:38.869 "req_id": 1 00:14:38.869 } 00:14:38.869 Got JSON-RPC error response 00:14:38.869 response: 00:14:38.869 { 00:14:38.869 "code": -19, 00:14:38.869 "message": "No such device" 00:14:38.869 } 00:14:38.869 20:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:38.869 20:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:38.869 20:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:38.869 20:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:38.869 20:13:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:39.126 aio_bdev 00:14:39.126 20:13:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 460527ff-82aa-4455-b4cd-8c069ed80fe9 00:14:39.126 20:13:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=460527ff-82aa-4455-b4cd-8c069ed80fe9 00:14:39.126 20:13:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:39.126 20:13:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:39.126 20:13:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:39.126 20:13:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:39.126 20:13:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:39.691 20:13:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 460527ff-82aa-4455-b4cd-8c069ed80fe9 -t 2000 00:14:39.948 [ 00:14:39.948 { 00:14:39.948 "name": "460527ff-82aa-4455-b4cd-8c069ed80fe9", 00:14:39.948 "aliases": [ 00:14:39.948 "lvs/lvol" 00:14:39.948 ], 00:14:39.948 "product_name": "Logical Volume", 00:14:39.948 "block_size": 4096, 00:14:39.948 "num_blocks": 38912, 00:14:39.948 "uuid": "460527ff-82aa-4455-b4cd-8c069ed80fe9", 00:14:39.948 "assigned_rate_limits": { 00:14:39.948 "rw_ios_per_sec": 0, 00:14:39.948 "rw_mbytes_per_sec": 0, 00:14:39.948 "r_mbytes_per_sec": 0, 00:14:39.948 "w_mbytes_per_sec": 0 00:14:39.948 }, 00:14:39.948 "claimed": false, 00:14:39.948 "zoned": false, 00:14:39.948 "supported_io_types": { 00:14:39.948 "read": true, 00:14:39.948 "write": true, 00:14:39.948 "unmap": true, 00:14:39.948 "flush": false, 00:14:39.948 "reset": true, 00:14:39.948 "nvme_admin": false, 00:14:39.948 "nvme_io": false, 00:14:39.948 
"nvme_io_md": false, 00:14:39.948 "write_zeroes": true, 00:14:39.948 "zcopy": false, 00:14:39.948 "get_zone_info": false, 00:14:39.948 "zone_management": false, 00:14:39.948 "zone_append": false, 00:14:39.948 "compare": false, 00:14:39.948 "compare_and_write": false, 00:14:39.948 "abort": false, 00:14:39.948 "seek_hole": true, 00:14:39.948 "seek_data": true, 00:14:39.948 "copy": false, 00:14:39.948 "nvme_iov_md": false 00:14:39.948 }, 00:14:39.948 "driver_specific": { 00:14:39.948 "lvol": { 00:14:39.948 "lvol_store_uuid": "3c143e15-03af-4e7c-8aac-abda1008fb2f", 00:14:39.948 "base_bdev": "aio_bdev", 00:14:39.948 "thin_provision": false, 00:14:39.948 "num_allocated_clusters": 38, 00:14:39.948 "snapshot": false, 00:14:39.948 "clone": false, 00:14:39.948 "esnap_clone": false 00:14:39.948 } 00:14:39.948 } 00:14:39.948 } 00:14:39.948 ] 00:14:39.948 20:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:39.948 20:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c143e15-03af-4e7c-8aac-abda1008fb2f 00:14:39.948 20:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:40.206 20:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:40.206 20:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c143e15-03af-4e7c-8aac-abda1008fb2f 00:14:40.206 20:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:40.465 20:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:40.465 20:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 460527ff-82aa-4455-b4cd-8c069ed80fe9 00:14:40.723 20:13:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3c143e15-03af-4e7c-8aac-abda1008fb2f 00:14:40.982 20:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:41.241 20:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:41.241 00:14:41.241 real 0m19.561s 00:14:41.241 user 0m19.424s 00:14:41.241 sys 0m1.747s 00:14:41.241 20:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:41.241 20:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:41.241 ************************************ 00:14:41.241 END TEST lvs_grow_clean 00:14:41.241 ************************************ 00:14:41.241 20:13:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:41.241 20:13:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:41.241 20:13:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:41.241 20:13:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:14:41.241 20:13:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:41.241 ************************************ 00:14:41.241 START TEST lvs_grow_dirty 00:14:41.241 ************************************ 00:14:41.241 20:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:41.241 20:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:41.241 20:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:41.241 20:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:41.241 20:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:41.241 20:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:41.241 20:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:41.241 20:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:41.241 20:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:41.241 20:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:41.500 20:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:41.500 20:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:41.759 20:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=84448636-9e65-4b1a-9945-7c4d3b0a8e99 00:14:41.759 20:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84448636-9e65-4b1a-9945-7c4d3b0a8e99 00:14:41.759 20:13:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:42.018 20:13:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:42.018 20:13:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:42.018 20:13:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 84448636-9e65-4b1a-9945-7c4d3b0a8e99 lvol 150 00:14:42.277 20:13:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c2b9bb55-543e-4de3-8d6d-14c204d2e099 00:14:42.277 20:13:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:42.277 20:13:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:42.536 
[2024-07-15 20:13:07.671844] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:42.536 [2024-07-15 20:13:07.671910] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:42.536 true 00:14:42.536 20:13:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84448636-9e65-4b1a-9945-7c4d3b0a8e99 00:14:42.536 20:13:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:42.795 20:13:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:42.795 20:13:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:43.054 20:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c2b9bb55-543e-4de3-8d6d-14c204d2e099 00:14:43.312 20:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:43.312 [2024-07-15 20:13:08.622753] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.312 20:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:43.570 20:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4183885 00:14:43.570 20:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:43.570 20:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:43.570 20:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4183885 /var/tmp/bdevperf.sock 00:14:43.570 20:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 4183885 ']' 00:14:43.570 20:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:43.570 20:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:43.570 20:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:43.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
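For reference, the setup that lvs_grow_dirty has executed up to this point can be replayed by hand roughly as follows. This is a minimal sketch using the same rpc.py calls that appear in the log; $SPDK, /tmp/aio_file and the $lvs/$lvol shell variables are illustrative placeholders, not values taken from this run.

  # 200M file-backed AIO bdev with an lvstore on top, plus a 150M lvol
  truncate -s 200M /tmp/aio_file
  $SPDK/scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
  lvs=$($SPDK/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
  lvol=$($SPDK/scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)
  # grow the backing file (the lvstore itself is grown later) and let the AIO bdev notice
  truncate -s 400M /tmp/aio_file
  $SPDK/scripts/rpc.py bdev_aio_rescan aio_bdev
  # export the lvol over NVMe/TCP and point a standalone bdevperf instance at it
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &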
00:14:43.570 20:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.570 20:13:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:43.829 [2024-07-15 20:13:08.931120] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:14:43.829 [2024-07-15 20:13:08.931181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4183885 ] 00:14:43.829 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.829 [2024-07-15 20:13:09.002095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.829 [2024-07-15 20:13:09.092149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.087 20:13:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:44.087 20:13:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:44.087 20:13:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:44.345 Nvme0n1 00:14:44.345 20:13:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:44.604 [ 00:14:44.604 { 00:14:44.604 "name": "Nvme0n1", 00:14:44.604 "aliases": [ 00:14:44.604 "c2b9bb55-543e-4de3-8d6d-14c204d2e099" 00:14:44.604 ], 00:14:44.604 "product_name": "NVMe disk", 00:14:44.604 "block_size": 4096, 00:14:44.604 "num_blocks": 38912, 00:14:44.604 "uuid": "c2b9bb55-543e-4de3-8d6d-14c204d2e099", 00:14:44.604 "assigned_rate_limits": { 00:14:44.604 "rw_ios_per_sec": 0, 00:14:44.604 "rw_mbytes_per_sec": 0, 00:14:44.604 "r_mbytes_per_sec": 0, 00:14:44.604 "w_mbytes_per_sec": 0 00:14:44.604 }, 00:14:44.604 "claimed": false, 00:14:44.604 "zoned": false, 00:14:44.604 "supported_io_types": { 00:14:44.604 "read": true, 00:14:44.604 "write": true, 00:14:44.604 "unmap": true, 00:14:44.604 "flush": true, 00:14:44.604 "reset": true, 00:14:44.604 "nvme_admin": true, 00:14:44.605 "nvme_io": true, 00:14:44.605 "nvme_io_md": false, 00:14:44.605 "write_zeroes": true, 00:14:44.605 "zcopy": false, 00:14:44.605 "get_zone_info": false, 00:14:44.605 "zone_management": false, 00:14:44.605 "zone_append": false, 00:14:44.605 "compare": true, 00:14:44.605 "compare_and_write": true, 00:14:44.605 "abort": true, 00:14:44.605 "seek_hole": false, 00:14:44.605 "seek_data": false, 00:14:44.605 "copy": true, 00:14:44.605 "nvme_iov_md": false 00:14:44.605 }, 00:14:44.605 "memory_domains": [ 00:14:44.605 { 00:14:44.605 "dma_device_id": "system", 00:14:44.605 "dma_device_type": 1 00:14:44.605 } 00:14:44.605 ], 00:14:44.605 "driver_specific": { 00:14:44.605 "nvme": [ 00:14:44.605 { 00:14:44.605 "trid": { 00:14:44.605 "trtype": "TCP", 00:14:44.605 "adrfam": "IPv4", 00:14:44.605 "traddr": "10.0.0.2", 00:14:44.605 "trsvcid": "4420", 00:14:44.605 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:44.605 }, 00:14:44.605 "ctrlr_data": { 00:14:44.605 "cntlid": 1, 00:14:44.605 "vendor_id": "0x8086", 00:14:44.605 "model_number": "SPDK bdev Controller", 00:14:44.605 "serial_number": "SPDK0", 
00:14:44.605 "firmware_revision": "24.09", 00:14:44.605 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:44.605 "oacs": { 00:14:44.605 "security": 0, 00:14:44.605 "format": 0, 00:14:44.605 "firmware": 0, 00:14:44.605 "ns_manage": 0 00:14:44.605 }, 00:14:44.605 "multi_ctrlr": true, 00:14:44.605 "ana_reporting": false 00:14:44.605 }, 00:14:44.605 "vs": { 00:14:44.605 "nvme_version": "1.3" 00:14:44.605 }, 00:14:44.605 "ns_data": { 00:14:44.605 "id": 1, 00:14:44.605 "can_share": true 00:14:44.605 } 00:14:44.605 } 00:14:44.605 ], 00:14:44.605 "mp_policy": "active_passive" 00:14:44.605 } 00:14:44.605 } 00:14:44.605 ] 00:14:44.605 20:13:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4183968 00:14:44.605 20:13:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:44.605 20:13:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:44.864 Running I/O for 10 seconds... 00:14:45.800 Latency(us) 00:14:45.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.800 Nvme0n1 : 1.00 15368.00 60.03 0.00 0.00 0.00 0.00 0.00 00:14:45.800 =================================================================================================================== 00:14:45.800 Total : 15368.00 60.03 0.00 0.00 0.00 0.00 0.00 00:14:45.800 00:14:46.735 20:13:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 84448636-9e65-4b1a-9945-7c4d3b0a8e99 00:14:46.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.735 Nvme0n1 : 2.00 15431.00 60.28 0.00 0.00 0.00 0.00 0.00 00:14:46.735 =================================================================================================================== 00:14:46.735 Total : 15431.00 60.28 0.00 0.00 0.00 0.00 0.00 00:14:46.735 00:14:46.994 true 00:14:46.994 20:13:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84448636-9e65-4b1a-9945-7c4d3b0a8e99 00:14:46.994 20:13:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:47.252 20:13:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:47.252 20:13:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:47.252 20:13:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4183968 00:14:47.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.820 Nvme0n1 : 3.00 15473.67 60.44 0.00 0.00 0.00 0.00 0.00 00:14:47.820 =================================================================================================================== 00:14:47.820 Total : 15473.67 60.44 0.00 0.00 0.00 0.00 0.00 00:14:47.820 00:14:48.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.755 Nvme0n1 : 4.00 15483.00 60.48 0.00 0.00 0.00 0.00 0.00 00:14:48.755 =================================================================================================================== 00:14:48.755 Total : 15483.00 60.48 0.00 
0.00 0.00 0.00 0.00 00:14:48.755 00:14:49.692 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.692 Nvme0n1 : 5.00 15510.60 60.59 0.00 0.00 0.00 0.00 0.00 00:14:49.692 =================================================================================================================== 00:14:49.692 Total : 15510.60 60.59 0.00 0.00 0.00 0.00 0.00 00:14:49.692 00:14:51.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.069 Nvme0n1 : 6.00 15529.33 60.66 0.00 0.00 0.00 0.00 0.00 00:14:51.069 =================================================================================================================== 00:14:51.069 Total : 15529.33 60.66 0.00 0.00 0.00 0.00 0.00 00:14:51.069 00:14:52.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.007 Nvme0n1 : 7.00 15551.57 60.75 0.00 0.00 0.00 0.00 0.00 00:14:52.007 =================================================================================================================== 00:14:52.007 Total : 15551.57 60.75 0.00 0.00 0.00 0.00 0.00 00:14:52.007 00:14:52.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.945 Nvme0n1 : 8.00 15560.25 60.78 0.00 0.00 0.00 0.00 0.00 00:14:52.945 =================================================================================================================== 00:14:52.945 Total : 15560.25 60.78 0.00 0.00 0.00 0.00 0.00 00:14:52.945 00:14:53.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.883 Nvme0n1 : 9.00 15567.00 60.81 0.00 0.00 0.00 0.00 0.00 00:14:53.883 =================================================================================================================== 00:14:53.883 Total : 15567.00 60.81 0.00 0.00 0.00 0.00 0.00 00:14:53.883 00:14:54.821 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.821 Nvme0n1 : 10.00 15579.20 60.86 0.00 0.00 0.00 0.00 0.00 00:14:54.821 =================================================================================================================== 00:14:54.821 Total : 15579.20 60.86 0.00 0.00 0.00 0.00 0.00 00:14:54.821 00:14:54.821 00:14:54.821 Latency(us) 00:14:54.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.821 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.821 Nvme0n1 : 10.01 15577.55 60.85 0.00 0.00 8212.53 2487.39 14537.08 00:14:54.821 =================================================================================================================== 00:14:54.821 Total : 15577.55 60.85 0.00 0.00 8212.53 2487.39 14537.08 00:14:54.821 0 00:14:54.821 20:13:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4183885 00:14:54.821 20:13:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 4183885 ']' 00:14:54.821 20:13:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 4183885 00:14:54.821 20:13:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:14:54.821 20:13:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:54.821 20:13:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4183885 00:14:54.821 20:13:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:54.821 20:13:20 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:54.821 20:13:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4183885' 00:14:54.821 killing process with pid 4183885 00:14:54.821 20:13:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 4183885 00:14:54.821 Received shutdown signal, test time was about 10.000000 seconds 00:14:54.821 00:14:54.821 Latency(us) 00:14:54.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.821 =================================================================================================================== 00:14:54.821 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:54.821 20:13:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 4183885 00:14:55.079 20:13:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:55.337 20:13:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:55.595 20:13:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84448636-9e65-4b1a-9945-7c4d3b0a8e99 00:14:55.595 20:13:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:55.853 20:13:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:55.853 20:13:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:55.853 20:13:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4179669 00:14:55.853 20:13:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4179669 00:14:55.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4179669 Killed "${NVMF_APP[@]}" "$@" 00:14:55.853 20:13:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:55.853 20:13:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:55.853 20:13:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:55.853 20:13:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:55.853 20:13:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:55.853 20:13:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=4186060 00:14:55.853 20:13:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 4186060 00:14:55.853 20:13:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:55.853 20:13:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 4186060 ']' 00:14:55.853 20:13:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.853 20:13:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.853 20:13:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.853 20:13:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.853 20:13:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:55.853 [2024-07-15 20:13:21.165129] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:14:55.853 [2024-07-15 20:13:21.165196] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.853 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.112 [2024-07-15 20:13:21.252426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.112 [2024-07-15 20:13:21.340798] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.112 [2024-07-15 20:13:21.340839] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.112 [2024-07-15 20:13:21.340849] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.112 [2024-07-15 20:13:21.340863] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.112 [2024-07-15 20:13:21.340870] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:56.112 [2024-07-15 20:13:21.340897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.047 20:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.047 20:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:57.047 20:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:57.047 20:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:57.047 20:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:57.047 20:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.047 20:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:57.047 [2024-07-15 20:13:22.365862] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:57.047 [2024-07-15 20:13:22.365966] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:57.047 [2024-07-15 20:13:22.366003] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:57.047 20:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:57.047 20:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c2b9bb55-543e-4de3-8d6d-14c204d2e099 00:14:57.047 20:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=c2b9bb55-543e-4de3-8d6d-14c204d2e099 00:14:57.047 20:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:57.047 20:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:57.047 20:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:57.047 20:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:57.047 20:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:57.310 20:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c2b9bb55-543e-4de3-8d6d-14c204d2e099 -t 2000 00:14:57.571 [ 00:14:57.571 { 00:14:57.571 "name": "c2b9bb55-543e-4de3-8d6d-14c204d2e099", 00:14:57.571 "aliases": [ 00:14:57.571 "lvs/lvol" 00:14:57.571 ], 00:14:57.571 "product_name": "Logical Volume", 00:14:57.571 "block_size": 4096, 00:14:57.571 "num_blocks": 38912, 00:14:57.571 "uuid": "c2b9bb55-543e-4de3-8d6d-14c204d2e099", 00:14:57.571 "assigned_rate_limits": { 00:14:57.571 "rw_ios_per_sec": 0, 00:14:57.571 "rw_mbytes_per_sec": 0, 00:14:57.571 "r_mbytes_per_sec": 0, 00:14:57.571 "w_mbytes_per_sec": 0 00:14:57.571 }, 00:14:57.571 "claimed": false, 00:14:57.571 "zoned": false, 00:14:57.571 "supported_io_types": { 00:14:57.571 "read": true, 00:14:57.571 "write": true, 00:14:57.571 "unmap": true, 00:14:57.571 "flush": false, 00:14:57.571 "reset": true, 00:14:57.571 "nvme_admin": false, 00:14:57.571 "nvme_io": false, 00:14:57.571 "nvme_io_md": 
false, 00:14:57.571 "write_zeroes": true, 00:14:57.571 "zcopy": false, 00:14:57.571 "get_zone_info": false, 00:14:57.571 "zone_management": false, 00:14:57.571 "zone_append": false, 00:14:57.571 "compare": false, 00:14:57.571 "compare_and_write": false, 00:14:57.571 "abort": false, 00:14:57.571 "seek_hole": true, 00:14:57.571 "seek_data": true, 00:14:57.571 "copy": false, 00:14:57.571 "nvme_iov_md": false 00:14:57.571 }, 00:14:57.571 "driver_specific": { 00:14:57.571 "lvol": { 00:14:57.571 "lvol_store_uuid": "84448636-9e65-4b1a-9945-7c4d3b0a8e99", 00:14:57.571 "base_bdev": "aio_bdev", 00:14:57.571 "thin_provision": false, 00:14:57.571 "num_allocated_clusters": 38, 00:14:57.571 "snapshot": false, 00:14:57.571 "clone": false, 00:14:57.571 "esnap_clone": false 00:14:57.571 } 00:14:57.571 } 00:14:57.571 } 00:14:57.571 ] 00:14:57.571 20:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:57.571 20:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84448636-9e65-4b1a-9945-7c4d3b0a8e99 00:14:57.571 20:13:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:57.828 20:13:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:57.828 20:13:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84448636-9e65-4b1a-9945-7c4d3b0a8e99 00:14:57.828 20:13:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:58.394 20:13:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:58.394 20:13:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:59.026 [2024-07-15 20:13:24.096302] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:59.026 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84448636-9e65-4b1a-9945-7c4d3b0a8e99 00:14:59.026 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:59.026 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84448636-9e65-4b1a-9945-7c4d3b0a8e99 00:14:59.026 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:59.026 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:59.026 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:59.026 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:59.026 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
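The records a few lines back capture the dirty-recovery path itself: the target that owned the lvstore was killed with SIGKILL, a fresh nvmf_tgt was started, and re-creating the AIO bdev made the blobstore replay its metadata ("Performing recovery on blobstore"). The equivalent manual steps look roughly like this; $SPDK, /tmp/aio_file and $lvol are the same placeholders as in the sketch above, and the 2000 ms timeout mirrors the waitforbdev helper the test uses.

  # re-attach the same backing file on the restarted target; examine triggers blobstore recovery
  $SPDK/scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
  # wait for examine to finish, then confirm the lvol re-appeared (per-call timeout of 2000 ms)
  $SPDK/scripts/rpc.py bdev_wait_for_examine
  $SPDK/scripts/rpc.py bdev_get_bdevs -b "$lvol" -t 2000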
00:14:59.026 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:59.026 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:59.026 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:59.027 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84448636-9e65-4b1a-9945-7c4d3b0a8e99 00:14:59.333 request: 00:14:59.333 { 00:14:59.333 "uuid": "84448636-9e65-4b1a-9945-7c4d3b0a8e99", 00:14:59.333 "method": "bdev_lvol_get_lvstores", 00:14:59.333 "req_id": 1 00:14:59.333 } 00:14:59.333 Got JSON-RPC error response 00:14:59.333 response: 00:14:59.333 { 00:14:59.333 "code": -19, 00:14:59.333 "message": "No such device" 00:14:59.333 } 00:14:59.333 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:59.333 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:59.333 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:59.333 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:59.333 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:59.333 aio_bdev 00:14:59.333 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c2b9bb55-543e-4de3-8d6d-14c204d2e099 00:14:59.333 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=c2b9bb55-543e-4de3-8d6d-14c204d2e099 00:14:59.333 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:59.333 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:59.333 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:59.333 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:59.333 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:59.591 20:13:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c2b9bb55-543e-4de3-8d6d-14c204d2e099 -t 2000 00:14:59.850 [ 00:14:59.850 { 00:14:59.850 "name": "c2b9bb55-543e-4de3-8d6d-14c204d2e099", 00:14:59.850 "aliases": [ 00:14:59.850 "lvs/lvol" 00:14:59.850 ], 00:14:59.850 "product_name": "Logical Volume", 00:14:59.850 "block_size": 4096, 00:14:59.850 "num_blocks": 38912, 00:14:59.850 "uuid": "c2b9bb55-543e-4de3-8d6d-14c204d2e099", 00:14:59.850 "assigned_rate_limits": { 00:14:59.850 "rw_ios_per_sec": 0, 00:14:59.850 "rw_mbytes_per_sec": 0, 00:14:59.850 "r_mbytes_per_sec": 0, 00:14:59.850 "w_mbytes_per_sec": 0 00:14:59.850 }, 00:14:59.850 "claimed": false, 00:14:59.850 "zoned": false, 00:14:59.850 "supported_io_types": { 
00:14:59.850 "read": true, 00:14:59.850 "write": true, 00:14:59.850 "unmap": true, 00:14:59.850 "flush": false, 00:14:59.850 "reset": true, 00:14:59.850 "nvme_admin": false, 00:14:59.850 "nvme_io": false, 00:14:59.850 "nvme_io_md": false, 00:14:59.850 "write_zeroes": true, 00:14:59.850 "zcopy": false, 00:14:59.850 "get_zone_info": false, 00:14:59.850 "zone_management": false, 00:14:59.850 "zone_append": false, 00:14:59.850 "compare": false, 00:14:59.850 "compare_and_write": false, 00:14:59.850 "abort": false, 00:14:59.850 "seek_hole": true, 00:14:59.850 "seek_data": true, 00:14:59.850 "copy": false, 00:14:59.850 "nvme_iov_md": false 00:14:59.850 }, 00:14:59.850 "driver_specific": { 00:14:59.850 "lvol": { 00:14:59.850 "lvol_store_uuid": "84448636-9e65-4b1a-9945-7c4d3b0a8e99", 00:14:59.850 "base_bdev": "aio_bdev", 00:14:59.850 "thin_provision": false, 00:14:59.850 "num_allocated_clusters": 38, 00:14:59.850 "snapshot": false, 00:14:59.850 "clone": false, 00:14:59.850 "esnap_clone": false 00:14:59.850 } 00:14:59.850 } 00:14:59.850 } 00:14:59.850 ] 00:14:59.850 20:13:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:59.850 20:13:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84448636-9e65-4b1a-9945-7c4d3b0a8e99 00:14:59.850 20:13:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:00.109 20:13:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:00.109 20:13:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84448636-9e65-4b1a-9945-7c4d3b0a8e99 00:15:00.109 20:13:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:00.367 20:13:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:00.367 20:13:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c2b9bb55-543e-4de3-8d6d-14c204d2e099 00:15:00.647 20:13:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 84448636-9e65-4b1a-9945-7c4d3b0a8e99 00:15:00.906 20:13:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:01.165 00:15:01.165 real 0m19.925s 00:15:01.165 user 0m51.936s 00:15:01.165 sys 0m3.397s 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:01.165 ************************************ 00:15:01.165 END TEST lvs_grow_dirty 00:15:01.165 ************************************ 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
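The END marker above closes lvs_grow_dirty: after recovery the grown lvstore reports 99 data clusters of 4MiB each (the rest of the 400M file going to metadata) with 61 of them free, since the 150M lvol holds 38, and the resources are then deleted in reverse order. A condensed sketch of that check-and-teardown, with the same placeholder variables as above:

  # expect 99 total data clusters and 61 free ones after the grow + recovery
  free=$($SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
  total=$($SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
  (( free == 61 && total == 99 )) || echo "unexpected cluster counts" >&2
  # tear down lvol, lvstore, AIO bdev and the backing file
  $SPDK/scripts/rpc.py bdev_lvol_delete "$lvol"
  $SPDK/scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
  $SPDK/scripts/rpc.py bdev_aio_delete aio_bdev
  rm -f /tmp/aio_file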
00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:01.165 nvmf_trace.0 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:01.165 rmmod nvme_tcp 00:15:01.165 rmmod nvme_fabrics 00:15:01.165 rmmod nvme_keyring 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 4186060 ']' 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 4186060 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 4186060 ']' 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 4186060 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:01.165 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.424 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4186060 00:15:01.424 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:01.424 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:01.424 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4186060' 00:15:01.424 killing process with pid 4186060 00:15:01.424 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 4186060 00:15:01.424 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 4186060 00:15:01.424 20:13:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:01.424 20:13:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:01.424 20:13:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:01.424 
20:13:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:01.424 20:13:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:01.424 20:13:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.424 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:01.424 20:13:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.958 20:13:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:03.958 00:15:03.958 real 0m48.561s 00:15:03.958 user 1m18.964s 00:15:03.958 sys 0m9.901s 00:15:03.958 20:13:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:03.958 20:13:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:03.958 ************************************ 00:15:03.958 END TEST nvmf_lvs_grow 00:15:03.958 ************************************ 00:15:03.958 20:13:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:03.958 20:13:28 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:03.958 20:13:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:03.958 20:13:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:03.958 20:13:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:03.958 ************************************ 00:15:03.958 START TEST nvmf_bdev_io_wait 00:15:03.958 ************************************ 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:03.958 * Looking for test storage... 
00:15:03.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.958 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.959 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:03.959 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:03.959 20:13:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:03.959 20:13:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:03.959 20:13:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:03.959 20:13:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:03.959 20:13:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:03.959 20:13:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:03.959 20:13:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:03.959 20:13:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:03.959 20:13:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:03.959 20:13:29 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.959 20:13:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:03.959 20:13:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.959 20:13:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:03.959 20:13:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:03.959 20:13:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:03.959 20:13:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:09.229 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:09.229 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:09.229 Found net devices under 0000:af:00.0: cvl_0_0 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:09.229 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:09.230 Found net devices under 0000:af:00.1: cvl_0_1 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:09.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:09.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:15:09.230 00:15:09.230 --- 10.0.0.2 ping statistics --- 00:15:09.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.230 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:09.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:09.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:15:09.230 00:15:09.230 --- 10.0.0.1 ping statistics --- 00:15:09.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.230 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:09.230 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:09.489 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:09.489 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:09.489 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:09.489 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:09.489 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=4190640 00:15:09.489 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 4190640 00:15:09.489 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:09.489 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 4190640 ']' 00:15:09.489 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.489 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.489 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.489 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.489 20:13:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:09.489 [2024-07-15 20:13:34.661655] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
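The nvmf_tcp_init plumbing traced above reduces to the following sequence (a condensed recap of the commands already shown in the trace, nothing new; cvl_0_0 and cvl_0_1 are the two E810 port names discovered on this host, and the long workspace paths are dropped):

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check
  modprobe nvme-tcp                                                   # kernel host-side transport driver
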
00:15:09.489 [2024-07-15 20:13:34.661708] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.489 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.489 [2024-07-15 20:13:34.748249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.489 [2024-07-15 20:13:34.840763] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.489 [2024-07-15 20:13:34.840807] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.489 [2024-07-15 20:13:34.840818] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.489 [2024-07-15 20:13:34.840826] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.489 [2024-07-15 20:13:34.840833] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.489 [2024-07-15 20:13:34.840882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.748 [2024-07-15 20:13:34.840981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.748 [2024-07-15 20:13:34.841073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.748 [2024-07-15 20:13:34.841076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.315 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.315 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:10.315 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:10.315 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:10.315 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:10.315 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.315 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:10.315 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.315 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:10.315 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.315 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:10.315 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.315 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:10.574 [2024-07-15 20:13:35.723750] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
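Condensed, the target-side bring-up here (continued by the subsystem setup in the lines that follow) is the usual SPDK RPC sequence. This is a sketch using scripts/rpc.py, which is what the harness's rpc_cmd helper wraps; paths are shortened and the control socket is the default /var/tmp/spdk.sock:

  # nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace created earlier
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

  ./scripts/rpc.py bdev_set_options -p 5 -c 1        # tiny bdev IO pool/cache, so IOs hit the bdev_io_wait path
  ./scripts/rpc.py framework_start_init              # finish the initialization deferred by --wait-for-rpc
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                               # 64 MiB bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
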
00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:10.574 Malloc0 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:10.574 [2024-07-15 20:13:35.790691] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4190918 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4190920 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:10.574 { 00:15:10.574 "params": { 00:15:10.574 "name": "Nvme$subsystem", 00:15:10.574 "trtype": "$TEST_TRANSPORT", 00:15:10.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:10.574 "adrfam": "ipv4", 00:15:10.574 "trsvcid": "$NVMF_PORT", 00:15:10.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:10.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:10.574 "hdgst": ${hdgst:-false}, 00:15:10.574 "ddgst": ${ddgst:-false} 00:15:10.574 }, 00:15:10.574 "method": "bdev_nvme_attach_controller" 00:15:10.574 } 00:15:10.574 EOF 00:15:10.574 )") 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4190922 00:15:10.574 20:13:35 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:10.574 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4190925 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:10.575 { 00:15:10.575 "params": { 00:15:10.575 "name": "Nvme$subsystem", 00:15:10.575 "trtype": "$TEST_TRANSPORT", 00:15:10.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:10.575 "adrfam": "ipv4", 00:15:10.575 "trsvcid": "$NVMF_PORT", 00:15:10.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:10.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:10.575 "hdgst": ${hdgst:-false}, 00:15:10.575 "ddgst": ${ddgst:-false} 00:15:10.575 }, 00:15:10.575 "method": "bdev_nvme_attach_controller" 00:15:10.575 } 00:15:10.575 EOF 00:15:10.575 )") 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:10.575 { 00:15:10.575 "params": { 00:15:10.575 "name": "Nvme$subsystem", 00:15:10.575 "trtype": "$TEST_TRANSPORT", 00:15:10.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:10.575 "adrfam": "ipv4", 00:15:10.575 "trsvcid": "$NVMF_PORT", 00:15:10.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:10.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:10.575 "hdgst": ${hdgst:-false}, 00:15:10.575 "ddgst": ${ddgst:-false} 00:15:10.575 }, 00:15:10.575 "method": "bdev_nvme_attach_controller" 00:15:10.575 } 00:15:10.575 EOF 00:15:10.575 )") 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:10.575 { 00:15:10.575 "params": { 00:15:10.575 "name": "Nvme$subsystem", 00:15:10.575 "trtype": "$TEST_TRANSPORT", 00:15:10.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:10.575 "adrfam": "ipv4", 00:15:10.575 "trsvcid": "$NVMF_PORT", 00:15:10.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:10.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:10.575 "hdgst": ${hdgst:-false}, 00:15:10.575 "ddgst": ${ddgst:-false} 00:15:10.575 }, 00:15:10.575 "method": "bdev_nvme_attach_controller" 00:15:10.575 } 00:15:10.575 EOF 00:15:10.575 )") 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4190918 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:10.575 "params": { 00:15:10.575 "name": "Nvme1", 00:15:10.575 "trtype": "tcp", 00:15:10.575 "traddr": "10.0.0.2", 00:15:10.575 "adrfam": "ipv4", 00:15:10.575 "trsvcid": "4420", 00:15:10.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.575 "hdgst": false, 00:15:10.575 "ddgst": false 00:15:10.575 }, 00:15:10.575 "method": "bdev_nvme_attach_controller" 00:15:10.575 }' 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
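The gen_nvmf_target_json helper above resolves to the bdev_nvme_attach_controller parameters printed in the trace and is handed to each bdevperf instance via --json /dev/fd/63. The full document it produces looks roughly like this; only the params block is taken verbatim from the trace, while the outer subsystems/bdev wrapper is the standard SPDK JSON-config shape and is assumed here:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
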
00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:10.575 "params": { 00:15:10.575 "name": "Nvme1", 00:15:10.575 "trtype": "tcp", 00:15:10.575 "traddr": "10.0.0.2", 00:15:10.575 "adrfam": "ipv4", 00:15:10.575 "trsvcid": "4420", 00:15:10.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.575 "hdgst": false, 00:15:10.575 "ddgst": false 00:15:10.575 }, 00:15:10.575 "method": "bdev_nvme_attach_controller" 00:15:10.575 }' 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:10.575 "params": { 00:15:10.575 "name": "Nvme1", 00:15:10.575 "trtype": "tcp", 00:15:10.575 "traddr": "10.0.0.2", 00:15:10.575 "adrfam": "ipv4", 00:15:10.575 "trsvcid": "4420", 00:15:10.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.575 "hdgst": false, 00:15:10.575 "ddgst": false 00:15:10.575 }, 00:15:10.575 "method": "bdev_nvme_attach_controller" 00:15:10.575 }' 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:10.575 20:13:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:10.575 "params": { 00:15:10.575 "name": "Nvme1", 00:15:10.575 "trtype": "tcp", 00:15:10.575 "traddr": "10.0.0.2", 00:15:10.575 "adrfam": "ipv4", 00:15:10.575 "trsvcid": "4420", 00:15:10.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.575 "hdgst": false, 00:15:10.575 "ddgst": false 00:15:10.575 }, 00:15:10.575 "method": "bdev_nvme_attach_controller" 00:15:10.575 }' 00:15:10.575 [2024-07-15 20:13:35.843765] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:15:10.575 [2024-07-15 20:13:35.843827] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:10.575 [2024-07-15 20:13:35.846268] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:15:10.575 [2024-07-15 20:13:35.846323] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:10.575 [2024-07-15 20:13:35.848380] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:15:10.575 [2024-07-15 20:13:35.848435] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:10.575 [2024-07-15 20:13:35.849305] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:15:10.575 [2024-07-15 20:13:35.849357] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:10.575 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.834 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.834 [2024-07-15 20:13:36.068178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.834 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.834 [2024-07-15 20:13:36.135379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.834 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.093 [2024-07-15 20:13:36.193544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.093 [2024-07-15 20:13:36.209415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:11.093 [2024-07-15 20:13:36.242538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:11.093 [2024-07-15 20:13:36.249238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.093 [2024-07-15 20:13:36.283158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:11.093 [2024-07-15 20:13:36.338057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:11.093 Running I/O for 1 seconds... 00:15:11.093 Running I/O for 1 seconds... 00:15:11.351 Running I/O for 1 seconds... 00:15:11.351 Running I/O for 1 seconds... 00:15:12.286 00:15:12.286 Latency(us) 00:15:12.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.286 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:12.286 Nvme1n1 : 1.01 10327.51 40.34 0.00 0.00 12337.01 6672.76 19660.80 00:15:12.286 =================================================================================================================== 00:15:12.286 Total : 10327.51 40.34 0.00 0.00 12337.01 6672.76 19660.80 00:15:12.286 00:15:12.286 Latency(us) 00:15:12.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.286 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:12.286 Nvme1n1 : 1.02 4885.33 19.08 0.00 0.00 25843.62 12332.68 43134.60 00:15:12.286 =================================================================================================================== 00:15:12.286 Total : 4885.33 19.08 0.00 0.00 25843.62 12332.68 43134.60 00:15:12.286 00:15:12.286 Latency(us) 00:15:12.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.286 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:12.286 Nvme1n1 : 1.00 163357.51 638.12 0.00 0.00 780.38 314.65 942.08 00:15:12.286 =================================================================================================================== 00:15:12.286 Total : 163357.51 638.12 0.00 0.00 780.38 314.65 942.08 00:15:12.286 00:15:12.286 Latency(us) 00:15:12.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.286 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:12.286 Nvme1n1 : 1.01 5345.15 20.88 0.00 0.00 23863.71 6285.50 58386.62 00:15:12.286 =================================================================================================================== 00:15:12.286 Total : 5345.15 20.88 0.00 0.00 23863.71 6285.50 58386.62 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 4190920 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4190922 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4190925 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:12.545 rmmod nvme_tcp 00:15:12.545 rmmod nvme_fabrics 00:15:12.545 rmmod nvme_keyring 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 4190640 ']' 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 4190640 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 4190640 ']' 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 4190640 00:15:12.545 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:12.804 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:12.804 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4190640 00:15:12.804 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:12.804 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:12.804 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4190640' 00:15:12.804 killing process with pid 4190640 00:15:12.804 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 4190640 00:15:12.804 20:13:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 4190640 00:15:12.804 20:13:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:12.804 20:13:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:12.804 20:13:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:12.804 20:13:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:12.804 20:13:38 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:15:12.804 20:13:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.804 20:13:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.804 20:13:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.374 20:13:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:15.374 00:15:15.374 real 0m11.314s 00:15:15.374 user 0m20.481s 00:15:15.374 sys 0m5.991s 00:15:15.374 20:13:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:15.374 20:13:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:15.374 ************************************ 00:15:15.374 END TEST nvmf_bdev_io_wait 00:15:15.374 ************************************ 00:15:15.374 20:13:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:15.374 20:13:40 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:15.374 20:13:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:15.374 20:13:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:15.374 20:13:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:15.374 ************************************ 00:15:15.374 START TEST nvmf_queue_depth 00:15:15.374 ************************************ 00:15:15.374 20:13:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:15.374 * Looking for test storage... 
00:15:15.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:15.374 20:13:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:15.374 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:15.374 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.374 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.374 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.374 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:15.375 20:13:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:20.673 
20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:20.673 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:20.673 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:20.673 Found net devices under 0000:af:00.0: cvl_0_0 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:20.673 Found net devices under 0000:af:00.1: cvl_0_1 00:15:20.673 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:20.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:20.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:15:20.674 00:15:20.674 --- 10.0.0.2 ping statistics --- 00:15:20.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.674 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:20.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:20.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:15:20.674 00:15:20.674 --- 10.0.0.1 ping statistics --- 00:15:20.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.674 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1270 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1270 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1270 ']' 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:20.674 20:13:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:20.674 [2024-07-15 20:13:45.984832] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:15:20.674 [2024-07-15 20:13:45.984890] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.674 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.934 [2024-07-15 20:13:46.061267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.934 [2024-07-15 20:13:46.151405] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.934 [2024-07-15 20:13:46.151446] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.934 [2024-07-15 20:13:46.151456] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.934 [2024-07-15 20:13:46.151465] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.934 [2024-07-15 20:13:46.151473] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.934 [2024-07-15 20:13:46.151494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.934 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.934 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:20.934 20:13:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:20.934 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:20.934 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:21.194 [2024-07-15 20:13:46.290482] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:21.194 Malloc0 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.194 
20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:21.194 [2024-07-15 20:13:46.350036] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1294 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1294 /var/tmp/bdevperf.sock 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1294 ']' 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:21.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:21.194 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:21.194 [2024-07-15 20:13:46.403127] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
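The remainder of the queue-depth run, traced below, drives bdevperf over its own RPC socket rather than a JSON config; condensed, and with paths shortened, the flow is:

  # -z makes bdevperf wait for a perform_tests RPC; queue depth 1024, 4 KiB IOs, verify workload, 10 s
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

  # attach the target's subsystem as bdev NVMe0n1 through the bdevperf RPC socket
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # kick off the actual IO run
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
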
00:15:21.194 [2024-07-15 20:13:46.403181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294 ] 00:15:21.194 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.194 [2024-07-15 20:13:46.485756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.453 [2024-07-15 20:13:46.576695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.453 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:21.453 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:21.454 20:13:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:21.454 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.454 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:21.454 NVMe0n1 00:15:21.454 20:13:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.454 20:13:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:21.713 Running I/O for 10 seconds... 00:15:31.694 00:15:31.694 Latency(us) 00:15:31.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.694 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:31.694 Verification LBA range: start 0x0 length 0x4000 00:15:31.694 NVMe0n1 : 10.11 8180.96 31.96 0.00 0.00 124575.29 29550.78 79596.45 00:15:31.694 =================================================================================================================== 00:15:31.694 Total : 8180.96 31.96 0.00 0.00 124575.29 29550.78 79596.45 00:15:31.694 0 00:15:31.952 20:13:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1294 00:15:31.952 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1294 ']' 00:15:31.952 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1294 00:15:31.952 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:31.952 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:31.952 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1294 00:15:31.952 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:31.952 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:31.952 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1294' 00:15:31.952 killing process with pid 1294 00:15:31.952 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1294 00:15:31.952 Received shutdown signal, test time was about 10.000000 seconds 00:15:31.952 00:15:31.952 Latency(us) 00:15:31.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.952 =================================================================================================================== 00:15:31.952 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:31.952 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1294 00:15:31.952 20:13:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:31.952 20:13:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:31.952 20:13:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:31.952 20:13:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:31.952 20:13:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:31.952 20:13:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:31.952 20:13:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:31.952 20:13:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:32.210 rmmod nvme_tcp 00:15:32.210 rmmod nvme_fabrics 00:15:32.210 rmmod nvme_keyring 00:15:32.210 20:13:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:32.210 20:13:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:32.210 20:13:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:32.210 20:13:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1270 ']' 00:15:32.210 20:13:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1270 00:15:32.210 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1270 ']' 00:15:32.210 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1270 00:15:32.210 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:32.210 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:32.210 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1270 00:15:32.210 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:32.210 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:32.210 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1270' 00:15:32.210 killing process with pid 1270 00:15:32.210 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1270 00:15:32.210 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1270 00:15:32.470 20:13:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:32.470 20:13:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:32.470 20:13:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:32.470 20:13:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:32.470 20:13:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:32.470 20:13:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.470 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.470 20:13:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.378 20:13:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:34.378 00:15:34.378 real 0m19.413s 00:15:34.378 user 0m23.443s 00:15:34.378 sys 0m5.538s 00:15:34.379 20:13:59 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:34.379 20:13:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:34.379 ************************************ 00:15:34.379 END TEST nvmf_queue_depth 00:15:34.379 ************************************ 00:15:34.379 20:13:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:34.379 20:13:59 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:34.379 20:13:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:34.379 20:13:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:34.379 20:13:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:34.637 ************************************ 00:15:34.637 START TEST nvmf_target_multipath 00:15:34.637 ************************************ 00:15:34.637 20:13:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:34.637 * Looking for test storage... 00:15:34.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:34.637 20:13:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.637 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:34.637 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.637 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.637 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.637 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.637 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.637 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.637 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.637 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:34.638 20:13:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:39.914 
20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:39.914 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:39.914 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:39.914 20:14:04 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:39.914 Found net devices under 0000:af:00.0: cvl_0_0 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:39.914 Found net devices under 0000:af:00.1: cvl_0_1 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:39.914 20:14:04 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:39.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:39.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:15:39.914 00:15:39.914 --- 10.0.0.2 ping statistics --- 00:15:39.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.914 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:39.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:39.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:15:39.914 00:15:39.914 --- 10.0.0.1 ping statistics --- 00:15:39.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.914 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:39.914 only one NIC for nvmf test 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:39.914 20:14:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:39.914 rmmod nvme_tcp 00:15:39.914 rmmod nvme_fabrics 00:15:39.914 rmmod nvme_keyring 00:15:39.914 20:14:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:39.914 20:14:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:39.914 20:14:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:39.914 20:14:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:39.914 20:14:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:39.914 20:14:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:39.914 20:14:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:39.914 20:14:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.914 20:14:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:39.914 20:14:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.914 20:14:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.914 20:14:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:41.818 00:15:41.818 real 0m7.337s 00:15:41.818 user 0m1.354s 00:15:41.818 sys 0m3.862s 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:41.818 20:14:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:41.818 ************************************ 00:15:41.818 END TEST nvmf_target_multipath 00:15:41.818 ************************************ 00:15:41.818 20:14:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:41.818 20:14:07 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:41.818 20:14:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:41.818 20:14:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:41.818 20:14:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:41.818 ************************************ 00:15:41.818 START TEST nvmf_zcopy 00:15:41.818 ************************************ 00:15:41.818 20:14:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:42.077 * Looking for test storage... 
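Both the multipath run above and the zcopy run that starts here go through the same nvmf_tcp_init bring-up before any NVMe/TCP traffic flows. The sketch below condenses the traced commands into one place; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones reported in this log, the comments are added here, and this is a hand-written summary rather than a verbatim excerpt of nvmf/common.sh:

  # start from clean addresses on both ports
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  # move the target-facing port into its own namespace so target and initiator use separate stacks
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator side stays in the default namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  # target side is configured inside the namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic (port 4420) in from the initiator interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # verify reachability in both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1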
00:15:42.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:42.077 20:14:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:42.077 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:42.077 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.077 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:42.078 20:14:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:47.455 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:47.455 
20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:47.455 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:47.455 Found net devices under 0000:af:00.0: cvl_0_0 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:47.455 Found net devices under 0000:af:00.1: cvl_0_1 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:47.455 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:47.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:47.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:15:47.714 00:15:47.714 --- 10.0.0.2 ping statistics --- 00:15:47.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.714 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:47.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:47.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:15:47.714 00:15:47.714 --- 10.0.0.1 ping statistics --- 00:15:47.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.714 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=10553 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 10553 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 10553 ']' 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.714 20:14:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:47.714 [2024-07-15 20:14:13.006285] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:15:47.714 [2024-07-15 20:14:13.006339] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.714 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.973 [2024-07-15 20:14:13.082208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.973 [2024-07-15 20:14:13.172162] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.973 [2024-07-15 20:14:13.172203] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:47.973 [2024-07-15 20:14:13.172212] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.973 [2024-07-15 20:14:13.172221] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.973 [2024-07-15 20:14:13.172229] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:47.973 [2024-07-15 20:14:13.172261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.973 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.973 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:15:47.973 20:14:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:47.973 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:47.973 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:47.973 20:14:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.973 20:14:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:47.973 20:14:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:47.974 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.974 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:47.974 [2024-07-15 20:14:13.310974] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:47.974 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.974 20:14:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:47.974 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.974 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:47.974 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.974 20:14:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:47.974 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.974 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:48.232 [2024-07-15 20:14:13.327120] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:48.232 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.232 20:14:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:48.232 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.232 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:48.232 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.232 20:14:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:48.232 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.232 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:48.232 malloc0 00:15:48.232 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.232 
20:14:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:48.232 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.232 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:48.232 20:14:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.232 20:14:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:48.232 20:14:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:48.232 20:14:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:48.232 20:14:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:48.232 20:14:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:48.232 20:14:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:48.232 { 00:15:48.232 "params": { 00:15:48.233 "name": "Nvme$subsystem", 00:15:48.233 "trtype": "$TEST_TRANSPORT", 00:15:48.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:48.233 "adrfam": "ipv4", 00:15:48.233 "trsvcid": "$NVMF_PORT", 00:15:48.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:48.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:48.233 "hdgst": ${hdgst:-false}, 00:15:48.233 "ddgst": ${ddgst:-false} 00:15:48.233 }, 00:15:48.233 "method": "bdev_nvme_attach_controller" 00:15:48.233 } 00:15:48.233 EOF 00:15:48.233 )") 00:15:48.233 20:14:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:48.233 20:14:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:48.233 20:14:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:48.233 20:14:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:48.233 "params": { 00:15:48.233 "name": "Nvme1", 00:15:48.233 "trtype": "tcp", 00:15:48.233 "traddr": "10.0.0.2", 00:15:48.233 "adrfam": "ipv4", 00:15:48.233 "trsvcid": "4420", 00:15:48.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:48.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:48.233 "hdgst": false, 00:15:48.233 "ddgst": false 00:15:48.233 }, 00:15:48.233 "method": "bdev_nvme_attach_controller" 00:15:48.233 }' 00:15:48.233 [2024-07-15 20:14:13.386463] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:15:48.233 [2024-07-15 20:14:13.386502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid10579 ] 00:15:48.233 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.233 [2024-07-15 20:14:13.456907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.233 [2024-07-15 20:14:13.544637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.800 Running I/O for 10 seconds... 
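While that 10-second verify run is in flight, the target-side sequence the trace above just issued can be read in one place. The sketch below restates those rpc_cmd calls (rpc_cmd ultimately drives scripts/rpc.py against the target's RPC socket) with the arguments exactly as they appear in the log; only the comments and the shortened bdevperf path are additions:

  # TCP transport with zero-copy enabled, flags exactly as traced above
  rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
  # subsystem with serial SPDK00000000000001, any host allowed, up to 10 namespaces
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # data and discovery listeners on the in-namespace target address
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # malloc bdev (32 x 4096-byte blocks arguments as traced) exported as namespace 1
  rpc_cmd bdev_malloc_create 32 4096 -b malloc0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # initiator side: bdevperf consumes the generated controller config as JSON on an fd
  build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192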
00:15:58.779 00:15:58.779 Latency(us) 00:15:58.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.779 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:58.779 Verification LBA range: start 0x0 length 0x1000 00:15:58.779 Nvme1n1 : 10.01 5772.75 45.10 0.00 0.00 22100.60 599.51 32887.16 00:15:58.779 =================================================================================================================== 00:15:58.779 Total : 5772.75 45.10 0.00 0.00 22100.60 599.51 32887.16 00:15:58.779 20:14:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=12605 00:15:58.779 20:14:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:15:58.779 20:14:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:58.779 20:14:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:58.779 20:14:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:58.779 20:14:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:58.779 20:14:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:58.779 20:14:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:58.779 20:14:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:58.779 { 00:15:58.779 "params": { 00:15:58.779 "name": "Nvme$subsystem", 00:15:58.779 "trtype": "$TEST_TRANSPORT", 00:15:58.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:58.779 "adrfam": "ipv4", 00:15:58.779 "trsvcid": "$NVMF_PORT", 00:15:58.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:58.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:58.779 "hdgst": ${hdgst:-false}, 00:15:58.779 "ddgst": ${ddgst:-false} 00:15:58.779 }, 00:15:58.779 "method": "bdev_nvme_attach_controller" 00:15:58.779 } 00:15:58.779 EOF 00:15:58.779 )") 00:15:58.779 20:14:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:58.779 [2024-07-15 20:14:24.115548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.779 [2024-07-15 20:14:24.115586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.779 20:14:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:15:58.779 20:14:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:58.779 20:14:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:58.779 "params": { 00:15:58.779 "name": "Nvme1", 00:15:58.779 "trtype": "tcp", 00:15:58.779 "traddr": "10.0.0.2", 00:15:58.779 "adrfam": "ipv4", 00:15:58.779 "trsvcid": "4420", 00:15:58.779 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:58.779 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:58.779 "hdgst": false, 00:15:58.779 "ddgst": false 00:15:58.779 }, 00:15:58.779 "method": "bdev_nvme_attach_controller" 00:15:58.779 }' 00:15:58.779 [2024-07-15 20:14:24.127547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.779 [2024-07-15 20:14:24.127563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.038 [2024-07-15 20:14:24.135565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.038 [2024-07-15 20:14:24.135580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.038 [2024-07-15 20:14:24.143587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.038 [2024-07-15 20:14:24.143601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.038 [2024-07-15 20:14:24.151607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.038 [2024-07-15 20:14:24.151620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.038 [2024-07-15 20:14:24.159054] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:15:59.038 [2024-07-15 20:14:24.159119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12605 ] 00:15:59.038 [2024-07-15 20:14:24.163643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.038 [2024-07-15 20:14:24.163658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.038 [2024-07-15 20:14:24.175676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.038 [2024-07-15 20:14:24.175695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.038 [2024-07-15 20:14:24.187712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.038 [2024-07-15 20:14:24.187726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.038 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.039 [2024-07-15 20:14:24.199746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.039 [2024-07-15 20:14:24.199760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.039 [2024-07-15 20:14:24.211777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.039 [2024-07-15 20:14:24.211790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.039 [2024-07-15 20:14:24.223811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.039 [2024-07-15 20:14:24.223824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.039 [2024-07-15 20:14:24.235842] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.039 [2024-07-15 20:14:24.235855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.039 [2024-07-15 20:14:24.241205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.039 [2024-07-15 20:14:24.247880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.039 [2024-07-15 20:14:24.247895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.039 [2024-07-15 20:14:24.259911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.039 [2024-07-15 20:14:24.259926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.039 [2024-07-15 20:14:24.271943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.039 [2024-07-15 20:14:24.271957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.039 [2024-07-15 20:14:24.283981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.039 [2024-07-15 20:14:24.284001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.039 [2024-07-15 20:14:24.296012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.039 [2024-07-15 20:14:24.296030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.039 [2024-07-15 20:14:24.308045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.039 [2024-07-15 20:14:24.308057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.039 [2024-07-15 20:14:24.320075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.039 [2024-07-15 20:14:24.320089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.039 [2024-07-15 20:14:24.326776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.039 [2024-07-15 20:14:24.332110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.039 [2024-07-15 20:14:24.332124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.039 [2024-07-15 20:14:24.344150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.039 [2024-07-15 20:14:24.344171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.039 [2024-07-15 20:14:24.356178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.039 [2024-07-15 20:14:24.356194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.039 [2024-07-15 20:14:24.368209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.039 [2024-07-15 20:14:24.368224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.039 [2024-07-15 20:14:24.380239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.039 [2024-07-15 20:14:24.380252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.298 [2024-07-15 20:14:24.392282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.298 [2024-07-15 20:14:24.392304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:59.298 [2024-07-15 20:14:24.404310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.298 [2024-07-15 20:14:24.404324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.298 [2024-07-15 20:14:24.416356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.298 [2024-07-15 20:14:24.416379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.298 [2024-07-15 20:14:24.428391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.298 [2024-07-15 20:14:24.428409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.298 [2024-07-15 20:14:24.440426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.298 [2024-07-15 20:14:24.440445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.298 [2024-07-15 20:14:24.452460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.298 [2024-07-15 20:14:24.452478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.298 [2024-07-15 20:14:24.464490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.298 [2024-07-15 20:14:24.464508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.298 [2024-07-15 20:14:24.476528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.298 [2024-07-15 20:14:24.476550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.298 Running I/O for 5 seconds... 00:15:59.298 [2024-07-15 20:14:24.488552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.298 [2024-07-15 20:14:24.488567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.298 [2024-07-15 20:14:24.501242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.298 [2024-07-15 20:14:24.501274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.298 [2024-07-15 20:14:24.517601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.298 [2024-07-15 20:14:24.517625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.298 [2024-07-15 20:14:24.534219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.298 [2024-07-15 20:14:24.534242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.298 [2024-07-15 20:14:24.551501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.298 [2024-07-15 20:14:24.551526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.298 [2024-07-15 20:14:24.567513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.298 [2024-07-15 20:14:24.567536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.298 [2024-07-15 20:14:24.585294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.298 [2024-07-15 20:14:24.585317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.298 [2024-07-15 20:14:24.601924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:59.298 [2024-07-15 20:14:24.601948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.298 [2024-07-15 20:14:24.617804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.298 [2024-07-15 20:14:24.617828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.298 [2024-07-15 20:14:24.628641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.298 [2024-07-15 20:14:24.628664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.298 [2024-07-15 20:14:24.643918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.298 [2024-07-15 20:14:24.643940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.557 [2024-07-15 20:14:24.661325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.557 [2024-07-15 20:14:24.661353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.557 [2024-07-15 20:14:24.677485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.557 [2024-07-15 20:14:24.677508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.557 [2024-07-15 20:14:24.687589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.557 [2024-07-15 20:14:24.687611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.557 [2024-07-15 20:14:24.704085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.557 [2024-07-15 20:14:24.704107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.557 [2024-07-15 20:14:24.719777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.557 [2024-07-15 20:14:24.719800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.557 [2024-07-15 20:14:24.736064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.557 [2024-07-15 20:14:24.736087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.557 [2024-07-15 20:14:24.753559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.557 [2024-07-15 20:14:24.753583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.557 [2024-07-15 20:14:24.770208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.557 [2024-07-15 20:14:24.770232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.557 [2024-07-15 20:14:24.787074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.558 [2024-07-15 20:14:24.787098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.558 [2024-07-15 20:14:24.803251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.558 [2024-07-15 20:14:24.803279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.558 [2024-07-15 20:14:24.820695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.558 [2024-07-15 20:14:24.820718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:59.558 [2024-07-15 20:14:24.836413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.558 [2024-07-15 20:14:24.836436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.558 [2024-07-15 20:14:24.847223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.558 [2024-07-15 20:14:24.847246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.558 [2024-07-15 20:14:24.862633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.558 [2024-07-15 20:14:24.862655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.558 [2024-07-15 20:14:24.880280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.558 [2024-07-15 20:14:24.880303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.558 [2024-07-15 20:14:24.896040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.558 [2024-07-15 20:14:24.896061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.817 [2024-07-15 20:14:24.913208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.817 [2024-07-15 20:14:24.913231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.817 [2024-07-15 20:14:24.929573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.817 [2024-07-15 20:14:24.929596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.817 [2024-07-15 20:14:24.946715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.817 [2024-07-15 20:14:24.946740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.817 [2024-07-15 20:14:24.962842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.817 [2024-07-15 20:14:24.962865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.817 [2024-07-15 20:14:24.975513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.817 [2024-07-15 20:14:24.975536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.817 [2024-07-15 20:14:24.993570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.817 [2024-07-15 20:14:24.993594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.817 [2024-07-15 20:14:25.009182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.817 [2024-07-15 20:14:25.009205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.817 [2024-07-15 20:14:25.020293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.817 [2024-07-15 20:14:25.020316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.817 [2024-07-15 20:14:25.035371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.817 [2024-07-15 20:14:25.035394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.817 [2024-07-15 20:14:25.052326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.817 
[2024-07-15 20:14:25.052349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.817 [2024-07-15 20:14:25.069676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.817 [2024-07-15 20:14:25.069700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.817 [2024-07-15 20:14:25.084790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.817 [2024-07-15 20:14:25.084812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.817 [2024-07-15 20:14:25.100934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.817 [2024-07-15 20:14:25.100956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.817 [2024-07-15 20:14:25.116655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.817 [2024-07-15 20:14:25.116679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.817 [2024-07-15 20:14:25.134168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.817 [2024-07-15 20:14:25.134192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.817 [2024-07-15 20:14:25.150345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.817 [2024-07-15 20:14:25.150368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.817 [2024-07-15 20:14:25.166087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.817 [2024-07-15 20:14:25.166110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.076 [2024-07-15 20:14:25.176847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.076 [2024-07-15 20:14:25.176870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.076 [2024-07-15 20:14:25.192619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.076 [2024-07-15 20:14:25.192643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.076 [2024-07-15 20:14:25.208945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.076 [2024-07-15 20:14:25.208969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.076 [2024-07-15 20:14:25.224758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.076 [2024-07-15 20:14:25.224782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.076 [2024-07-15 20:14:25.241699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.076 [2024-07-15 20:14:25.241723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.076 [2024-07-15 20:14:25.256817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.076 [2024-07-15 20:14:25.256841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.076 [2024-07-15 20:14:25.274102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.076 [2024-07-15 20:14:25.274126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.076 [2024-07-15 20:14:25.289941] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.076 [2024-07-15 20:14:25.289965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.076 [2024-07-15 20:14:25.306327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.076 [2024-07-15 20:14:25.306351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.076 [2024-07-15 20:14:25.316118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.076 [2024-07-15 20:14:25.316141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.076 [2024-07-15 20:14:25.331278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.076 [2024-07-15 20:14:25.331302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.076 [2024-07-15 20:14:25.342249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.076 [2024-07-15 20:14:25.342279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.076 [2024-07-15 20:14:25.357567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.076 [2024-07-15 20:14:25.357590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.076 [2024-07-15 20:14:25.372809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.076 [2024-07-15 20:14:25.372833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.076 [2024-07-15 20:14:25.390135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.076 [2024-07-15 20:14:25.390158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.076 [2024-07-15 20:14:25.406177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.076 [2024-07-15 20:14:25.406199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.076 [2024-07-15 20:14:25.423415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.076 [2024-07-15 20:14:25.423438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.336 [2024-07-15 20:14:25.439907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.336 [2024-07-15 20:14:25.439931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.336 [2024-07-15 20:14:25.457009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.336 [2024-07-15 20:14:25.457034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.336 [2024-07-15 20:14:25.473211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.336 [2024-07-15 20:14:25.473234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.336 [2024-07-15 20:14:25.490766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.336 [2024-07-15 20:14:25.490789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.336 [2024-07-15 20:14:25.506724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.336 [2024-07-15 20:14:25.506748] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.336 [2024-07-15 20:14:25.524194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.336 [2024-07-15 20:14:25.524218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.336 [2024-07-15 20:14:25.539720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.336 [2024-07-15 20:14:25.539743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.336 [2024-07-15 20:14:25.556930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.336 [2024-07-15 20:14:25.556954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.336 [2024-07-15 20:14:25.573215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.336 [2024-07-15 20:14:25.573238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.336 [2024-07-15 20:14:25.589706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.336 [2024-07-15 20:14:25.589730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.336 [2024-07-15 20:14:25.605884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.336 [2024-07-15 20:14:25.605907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.336 [2024-07-15 20:14:25.616668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.336 [2024-07-15 20:14:25.616691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.336 [2024-07-15 20:14:25.631817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.336 [2024-07-15 20:14:25.631840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.336 [2024-07-15 20:14:25.642091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.336 [2024-07-15 20:14:25.642114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.336 [2024-07-15 20:14:25.655907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.336 [2024-07-15 20:14:25.655930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.336 [2024-07-15 20:14:25.673236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.336 [2024-07-15 20:14:25.673265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.595 [2024-07-15 20:14:25.689155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.595 [2024-07-15 20:14:25.689179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.595 [2024-07-15 20:14:25.698600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.595 [2024-07-15 20:14:25.698622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.595 [2024-07-15 20:14:25.713782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.595 [2024-07-15 20:14:25.713804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.595 [2024-07-15 20:14:25.723362] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.595 [2024-07-15 20:14:25.723383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.595 [2024-07-15 20:14:25.739196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.595 [2024-07-15 20:14:25.739219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.595 [2024-07-15 20:14:25.749226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.595 [2024-07-15 20:14:25.749248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.595 [2024-07-15 20:14:25.761005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.595 [2024-07-15 20:14:25.761028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.595 [2024-07-15 20:14:25.777788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.595 [2024-07-15 20:14:25.777811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.595 [2024-07-15 20:14:25.795325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.595 [2024-07-15 20:14:25.795347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.595 [2024-07-15 20:14:25.810615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.595 [2024-07-15 20:14:25.810638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.595 [2024-07-15 20:14:25.826715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.595 [2024-07-15 20:14:25.826737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.595 [2024-07-15 20:14:25.844338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.595 [2024-07-15 20:14:25.844361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.595 [2024-07-15 20:14:25.860447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.595 [2024-07-15 20:14:25.860470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.595 [2024-07-15 20:14:25.876364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.595 [2024-07-15 20:14:25.876386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.595 [2024-07-15 20:14:25.893984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.595 [2024-07-15 20:14:25.894007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.595 [2024-07-15 20:14:25.910016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.595 [2024-07-15 20:14:25.910038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.595 [2024-07-15 20:14:25.927102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.595 [2024-07-15 20:14:25.927126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.595 [2024-07-15 20:14:25.943845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.595 [2024-07-15 20:14:25.943867] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.854 [2024-07-15 20:14:25.960342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.854 [2024-07-15 20:14:25.960365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.854 [2024-07-15 20:14:25.976289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.854 [2024-07-15 20:14:25.976311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.854 [2024-07-15 20:14:25.987170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.854 [2024-07-15 20:14:25.987191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.854 [2024-07-15 20:14:26.002456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.854 [2024-07-15 20:14:26.002478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.854 [2024-07-15 20:14:26.019722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.854 [2024-07-15 20:14:26.019745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.854 [2024-07-15 20:14:26.034511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.854 [2024-07-15 20:14:26.034534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.854 [2024-07-15 20:14:26.050810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.854 [2024-07-15 20:14:26.050833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.854 [2024-07-15 20:14:26.066987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.854 [2024-07-15 20:14:26.067011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.854 [2024-07-15 20:14:26.083536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.854 [2024-07-15 20:14:26.083559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.854 [2024-07-15 20:14:26.103417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.854 [2024-07-15 20:14:26.103440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.854 [2024-07-15 20:14:26.118082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.854 [2024-07-15 20:14:26.118109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.854 [2024-07-15 20:14:26.134559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.854 [2024-07-15 20:14:26.134582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.854 [2024-07-15 20:14:26.151870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.854 [2024-07-15 20:14:26.151894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.854 [2024-07-15 20:14:26.167831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.854 [2024-07-15 20:14:26.167853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.854 [2024-07-15 20:14:26.185400] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.854 [2024-07-15 20:14:26.185424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.854 [2024-07-15 20:14:26.201412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.854 [2024-07-15 20:14:26.201435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.114 [2024-07-15 20:14:26.219097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.114 [2024-07-15 20:14:26.219120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.114 [2024-07-15 20:14:26.234056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.114 [2024-07-15 20:14:26.234079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.114 [2024-07-15 20:14:26.250123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.114 [2024-07-15 20:14:26.250147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.114 [2024-07-15 20:14:26.267521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.114 [2024-07-15 20:14:26.267544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.114 [2024-07-15 20:14:26.283517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.114 [2024-07-15 20:14:26.283541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.114 [2024-07-15 20:14:26.294120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.114 [2024-07-15 20:14:26.294143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.114 [2024-07-15 20:14:26.305208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.114 [2024-07-15 20:14:26.305232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.114 [2024-07-15 20:14:26.321624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.114 [2024-07-15 20:14:26.321648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.114 [2024-07-15 20:14:26.331507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.114 [2024-07-15 20:14:26.331530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.114 [2024-07-15 20:14:26.347230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.114 [2024-07-15 20:14:26.347253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.114 [2024-07-15 20:14:26.357070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.114 [2024-07-15 20:14:26.357093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.114 [2024-07-15 20:14:26.372294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.114 [2024-07-15 20:14:26.372317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.114 [2024-07-15 20:14:26.382234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.114 [2024-07-15 20:14:26.382265] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.114 [2024-07-15 20:14:26.397278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.114 [2024-07-15 20:14:26.397305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.114 [2024-07-15 20:14:26.414359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.114 [2024-07-15 20:14:26.414381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.114 [2024-07-15 20:14:26.430301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.114 [2024-07-15 20:14:26.430324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.114 [2024-07-15 20:14:26.448729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.114 [2024-07-15 20:14:26.448753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.114 [2024-07-15 20:14:26.463663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.114 [2024-07-15 20:14:26.463687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.373 [2024-07-15 20:14:26.480909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.373 [2024-07-15 20:14:26.480932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.373 [2024-07-15 20:14:26.497086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.373 [2024-07-15 20:14:26.497109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.373 [2024-07-15 20:14:26.514566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.373 [2024-07-15 20:14:26.514589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.373 [2024-07-15 20:14:26.530241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.373 [2024-07-15 20:14:26.530271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.373 [2024-07-15 20:14:26.541444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.373 [2024-07-15 20:14:26.541467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.373 [2024-07-15 20:14:26.556797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.373 [2024-07-15 20:14:26.556820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.373 [2024-07-15 20:14:26.572744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.374 [2024-07-15 20:14:26.572767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.374 [2024-07-15 20:14:26.583424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.374 [2024-07-15 20:14:26.583446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.374 [2024-07-15 20:14:26.598452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.374 [2024-07-15 20:14:26.598475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.374 [2024-07-15 20:14:26.608468] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.374 [2024-07-15 20:14:26.608491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.374 [2024-07-15 20:14:26.624830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.374 [2024-07-15 20:14:26.624853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.374 [2024-07-15 20:14:26.642302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.374 [2024-07-15 20:14:26.642326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.374 [2024-07-15 20:14:26.658026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.374 [2024-07-15 20:14:26.658050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.374 [2024-07-15 20:14:26.675303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.374 [2024-07-15 20:14:26.675328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.374 [2024-07-15 20:14:26.691583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.374 [2024-07-15 20:14:26.691612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.374 [2024-07-15 20:14:26.707636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.374 [2024-07-15 20:14:26.707659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.633 [2024-07-15 20:14:26.726382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.633 [2024-07-15 20:14:26.726407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.633 [2024-07-15 20:14:26.742080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.633 [2024-07-15 20:14:26.742104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.633 [2024-07-15 20:14:26.755147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.633 [2024-07-15 20:14:26.755171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.633 [2024-07-15 20:14:26.765313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.633 [2024-07-15 20:14:26.765337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.633 [2024-07-15 20:14:26.780069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.633 [2024-07-15 20:14:26.780092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.633 [2024-07-15 20:14:26.790897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.633 [2024-07-15 20:14:26.790920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.633 [2024-07-15 20:14:26.806384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.633 [2024-07-15 20:14:26.806407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.633 [2024-07-15 20:14:26.823444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.633 [2024-07-15 20:14:26.823467] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.633 [2024-07-15 20:14:26.840833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.633 [2024-07-15 20:14:26.840856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.633 [2024-07-15 20:14:26.856931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.633 [2024-07-15 20:14:26.856955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.633 [2024-07-15 20:14:26.873101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.633 [2024-07-15 20:14:26.873126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.633 [2024-07-15 20:14:26.889167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.633 [2024-07-15 20:14:26.889191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.633 [2024-07-15 20:14:26.905070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.633 [2024-07-15 20:14:26.905093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.633 [2024-07-15 20:14:26.921098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.633 [2024-07-15 20:14:26.921122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.633 [2024-07-15 20:14:26.931788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.633 [2024-07-15 20:14:26.931811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.633 [2024-07-15 20:14:26.946776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.633 [2024-07-15 20:14:26.946799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.633 [2024-07-15 20:14:26.957088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.633 [2024-07-15 20:14:26.957111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.633 [2024-07-15 20:14:26.968677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.633 [2024-07-15 20:14:26.968705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.633 [2024-07-15 20:14:26.984212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.633 [2024-07-15 20:14:26.984234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.892 [2024-07-15 20:14:27.002236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.892 [2024-07-15 20:14:27.002266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.892 [2024-07-15 20:14:27.017846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.892 [2024-07-15 20:14:27.017869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.892 [2024-07-15 20:14:27.034776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.892 [2024-07-15 20:14:27.034799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.892 [2024-07-15 20:14:27.052176] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:01.892 [2024-07-15 20:14:27.052199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:01.892 [2024-07-15 20:14:27.068305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:01.892 [2024-07-15 20:14:27.068328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:04.258 [2024-07-15 20:14:29.497914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:04.258 [2024-07-15 20:14:29.497937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:04.258 [2024-07-15 20:14:29.509852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:04.258 [2024-07-15 20:14:29.509874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:04.258
00:16:04.258                                                                             Latency(us)
00:16:04.258 Device Information                                                          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:04.258 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:04.258 Nvme1n1                                                                     :       5.01   11293.24      88.23       0.00       0.00   11321.60    5183.30   20971.52
00:16:04.258 ===================================================================================================================
00:16:04.258 Total                                                                       :   11293.24      88.23       0.00       0.00   11321.60    5183.30   20971.52
00:16:04.258 [2024-07-15 20:14:29.521874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:04.258 [2024-07-15 20:14:29.521894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:04.517 [2024-07-15 20:14:29.630166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:04.517 [2024-07-15 20:14:29.630180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:04.517 [2024-07-15 20:14:29.642201]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.517 [2024-07-15 20:14:29.642217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.517 [2024-07-15 20:14:29.654237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.517 [2024-07-15 20:14:29.654260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.517 [2024-07-15 20:14:29.666270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.517 [2024-07-15 20:14:29.666284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.517 [2024-07-15 20:14:29.678307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.517 [2024-07-15 20:14:29.678322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.517 [2024-07-15 20:14:29.690339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.517 [2024-07-15 20:14:29.690354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.517 [2024-07-15 20:14:29.702368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.517 [2024-07-15 20:14:29.702383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.517 [2024-07-15 20:14:29.714404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:04.517 [2024-07-15 20:14:29.714418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:04.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (12605) - No such process 00:16:04.517 20:14:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 12605 00:16:04.517 20:14:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:04.517 20:14:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.517 20:14:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:04.517 20:14:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.517 20:14:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:04.517 20:14:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.517 20:14:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:04.517 delay0 00:16:04.517 20:14:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.517 20:14:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:04.517 20:14:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.517 20:14:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:04.517 20:14:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.517 20:14:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:04.517 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.774 [2024-07-15 20:14:29.891415] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: 
Skipping unsupported current discovery service or discovery service referral 00:16:11.340 Initializing NVMe Controllers 00:16:11.340 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:11.340 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:11.340 Initialization complete. Launching workers. 00:16:11.340 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 891 00:16:11.340 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1178, failed to submit 33 00:16:11.340 success 1008, unsuccess 170, failed 0 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:11.340 rmmod nvme_tcp 00:16:11.340 rmmod nvme_fabrics 00:16:11.340 rmmod nvme_keyring 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 10553 ']' 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 10553 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 10553 ']' 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 10553 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 10553 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 10553' 00:16:11.340 killing process with pid 10553 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 10553 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 10553 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
14> /dev/null' 00:16:11.340 20:14:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.242 20:14:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:13.242 00:16:13.242 real 0m31.358s 00:16:13.242 user 0m43.318s 00:16:13.242 sys 0m9.794s 00:16:13.242 20:14:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:13.242 20:14:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:13.242 ************************************ 00:16:13.242 END TEST nvmf_zcopy 00:16:13.242 ************************************ 00:16:13.242 20:14:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:13.242 20:14:38 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:13.242 20:14:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:13.242 20:14:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.242 20:14:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:13.242 ************************************ 00:16:13.242 START TEST nvmf_nmic 00:16:13.242 ************************************ 00:16:13.242 20:14:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:13.503 * Looking for test storage... 00:16:13.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:13.503 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.504 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.504 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.504 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:13.504 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:13.504 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:13.504 20:14:38 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:13.504 20:14:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:13.504 20:14:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:13.504 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:13.504 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.504 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:13.504 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:13.504 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:13.504 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.504 20:14:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.504 20:14:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.504 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:13.504 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:13.504 20:14:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:13.504 20:14:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:18.784 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:18.784 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:18.784 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:18.784 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:18.784 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:18.784 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:18.784 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:18.784 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:18.784 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:18.784 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:18.784 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:18.784 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:18.784 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:18.784 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:18.784 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:18.784 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:18.785 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:18.785 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:18.785 Found net devices under 0000:af:00.0: cvl_0_0 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
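For readers following the trace: the discovery loop running here simply matches the E810 functions (vendor 0x8086, device 0x159b, bound to the ice driver) and reads each matching function's net/ directory in sysfs to learn the attached netdev names (cvl_0_0, cvl_0_1). A minimal standalone sketch of the same idea, assuming only the stock Linux sysfs layout; it is not taken from the test's own helpers:

#!/usr/bin/env bash
# Sketch only: reproduce the PCI-to-netdev mapping shown in the trace above.
# Vendor/device 0x8086/0x159b (Intel E810) is taken from the "Found 0000:af:00.x"
# lines; everything else is generic sysfs walking.
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done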
00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:18.785 Found net devices under 0000:af:00.1: cvl_0_1 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:18.785 20:14:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:18.785 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:18.785 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:18.785 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:18.785 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:19.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:19.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:16:19.042 00:16:19.042 --- 10.0.0.2 ping statistics --- 00:16:19.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.042 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:19.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:19.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:16:19.042 00:16:19.042 --- 10.0.0.1 ping statistics --- 00:16:19.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.042 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=18235 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 18235 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 18235 ']' 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:19.042 20:14:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:19.042 [2024-07-15 20:14:44.326365] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:16:19.042 [2024-07-15 20:14:44.326433] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.042 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.299 [2024-07-15 20:14:44.414905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:19.299 [2024-07-15 20:14:44.505197] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.300 [2024-07-15 20:14:44.505242] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.300 [2024-07-15 20:14:44.505252] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.300 [2024-07-15 20:14:44.505267] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.300 [2024-07-15 20:14:44.505275] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:19.300 [2024-07-15 20:14:44.505334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.300 [2024-07-15 20:14:44.505434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:19.300 [2024-07-15 20:14:44.505510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:19.300 [2024-07-15 20:14:44.505513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:20.232 [2024-07-15 20:14:45.322050] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:20.232 Malloc0 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:20.232 [2024-07-15 20:14:45.377860] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:20.232 test case1: single bdev can't be used in multiple subsystems 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:20.232 [2024-07-15 20:14:45.401783] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:20.232 [2024-07-15 20:14:45.401807] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:20.232 [2024-07-15 20:14:45.401817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.232 request: 00:16:20.232 { 00:16:20.232 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:20.232 "namespace": { 00:16:20.232 "bdev_name": "Malloc0", 00:16:20.232 "no_auto_visible": false 00:16:20.232 }, 00:16:20.232 "method": "nvmf_subsystem_add_ns", 00:16:20.232 "req_id": 1 00:16:20.232 } 00:16:20.232 Got JSON-RPC error response 00:16:20.232 response: 00:16:20.232 { 00:16:20.232 "code": -32602, 00:16:20.232 "message": "Invalid parameters" 00:16:20.232 } 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # 
echo ' Adding namespace failed - expected result.' 00:16:20.232 Adding namespace failed - expected result. 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:20.232 test case2: host connect to nvmf target in multiple paths 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:20.232 [2024-07-15 20:14:45.413915] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.232 20:14:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:21.609 20:14:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:23.011 20:14:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:23.011 20:14:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:16:23.011 20:14:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:23.011 20:14:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:23.011 20:14:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:16:24.915 20:14:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:24.915 20:14:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:24.915 20:14:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:24.915 20:14:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:24.915 20:14:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:24.915 20:14:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:16:24.915 20:14:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:24.915 [global] 00:16:24.915 thread=1 00:16:24.915 invalidate=1 00:16:24.915 rw=write 00:16:24.915 time_based=1 00:16:24.915 runtime=1 00:16:24.915 ioengine=libaio 00:16:24.915 direct=1 00:16:24.915 bs=4096 00:16:24.915 iodepth=1 00:16:24.915 norandommap=0 00:16:24.915 numjobs=1 00:16:24.915 00:16:24.915 verify_dump=1 00:16:24.915 verify_backlog=512 00:16:24.915 verify_state_save=0 00:16:24.915 do_verify=1 00:16:24.915 verify=crc32c-intel 00:16:24.915 [job0] 00:16:24.915 filename=/dev/nvme0n1 00:16:24.915 Could not set queue depth (nvme0n1) 00:16:25.174 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:25.174 fio-3.35 00:16:25.174 Starting 1 thread 00:16:26.552 00:16:26.552 job0: (groupid=0, jobs=1): err= 0: pid=19578: Mon Jul 15 20:14:51 2024 00:16:26.552 read: IOPS=22, BW=90.2KiB/s 
(92.4kB/s)(92.0KiB/1020msec) 00:16:26.552 slat (nsec): min=9787, max=23704, avg=19865.30, stdev=3061.16 00:16:26.552 clat (usec): min=382, max=41051, avg=39188.96, stdev=8459.75 00:16:26.552 lat (usec): min=393, max=41071, avg=39208.82, stdev=8461.69 00:16:26.552 clat percentiles (usec): 00:16:26.552 | 1.00th=[ 383], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:26.552 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:26.552 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:26.552 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:26.552 | 99.99th=[41157] 00:16:26.552 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:16:26.552 slat (nsec): min=11234, max=46527, avg=12499.90, stdev=2893.93 00:16:26.552 clat (usec): min=184, max=368, avg=214.52, stdev= 9.81 00:16:26.552 lat (usec): min=202, max=408, avg=227.02, stdev=10.48 00:16:26.552 clat percentiles (usec): 00:16:26.552 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 210], 00:16:26.552 | 30.00th=[ 212], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 217], 00:16:26.552 | 70.00th=[ 217], 80.00th=[ 219], 90.00th=[ 223], 95.00th=[ 225], 00:16:26.552 | 99.00th=[ 235], 99.50th=[ 241], 99.90th=[ 367], 99.95th=[ 367], 00:16:26.552 | 99.99th=[ 367] 00:16:26.552 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:26.552 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:26.552 lat (usec) : 250=95.33%, 500=0.56% 00:16:26.552 lat (msec) : 50=4.11% 00:16:26.552 cpu : usr=0.49%, sys=0.79%, ctx=535, majf=0, minf=2 00:16:26.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:26.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.552 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.552 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:26.552 00:16:26.552 Run status group 0 (all jobs): 00:16:26.552 READ: bw=90.2KiB/s (92.4kB/s), 90.2KiB/s-90.2KiB/s (92.4kB/s-92.4kB/s), io=92.0KiB (94.2kB), run=1020-1020msec 00:16:26.552 WRITE: bw=2008KiB/s (2056kB/s), 2008KiB/s-2008KiB/s (2056kB/s-2056kB/s), io=2048KiB (2097kB), run=1020-1020msec 00:16:26.552 00:16:26.552 Disk stats (read/write): 00:16:26.552 nvme0n1: ios=70/512, merge=0/0, ticks=818/106, in_queue=924, util=92.08% 00:16:26.552 20:14:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:26.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:26.552 20:14:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:26.552 20:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:16:26.552 20:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:26.552 20:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:26.552 20:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:26.552 20:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:26.552 20:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:16:26.552 20:14:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:26.552 20:14:51 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@53 -- # nvmftestfini 00:16:26.552 20:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:26.552 20:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:26.552 20:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:26.552 20:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:26.552 20:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:26.552 20:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:26.552 rmmod nvme_tcp 00:16:26.552 rmmod nvme_fabrics 00:16:26.552 rmmod nvme_keyring 00:16:26.811 20:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:26.811 20:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:26.811 20:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:26.811 20:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 18235 ']' 00:16:26.811 20:14:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 18235 00:16:26.811 20:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 18235 ']' 00:16:26.811 20:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 18235 00:16:26.811 20:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:16:26.811 20:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:26.811 20:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 18235 00:16:26.811 20:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:26.811 20:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:26.811 20:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 18235' 00:16:26.811 killing process with pid 18235 00:16:26.811 20:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 18235 00:16:26.811 20:14:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 18235 00:16:27.069 20:14:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:27.069 20:14:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:27.069 20:14:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:27.069 20:14:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:27.069 20:14:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:27.069 20:14:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.069 20:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.069 20:14:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.043 20:14:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:29.043 00:16:29.043 real 0m15.666s 00:16:29.043 user 0m43.298s 00:16:29.043 sys 0m4.995s 00:16:29.043 20:14:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:29.043 20:14:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:29.043 ************************************ 00:16:29.043 END TEST nvmf_nmic 00:16:29.043 ************************************ 00:16:29.043 20:14:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:29.043 20:14:54 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:29.043 20:14:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:29.043 20:14:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:29.043 20:14:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:29.043 ************************************ 00:16:29.043 START TEST nvmf_fio_target 00:16:29.043 ************************************ 00:16:29.043 20:14:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:29.303 * Looking for test storage... 00:16:29.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:29.303 20:14:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.575 20:14:59 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:34.575 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:34.575 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.575 20:14:59 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:34.575 Found net devices under 0000:af:00.0: cvl_0_0 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:34.575 Found net devices under 0000:af:00.1: cvl_0_1 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:16:34.575 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:34.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:16:34.576 00:16:34.576 --- 10.0.0.2 ping statistics --- 00:16:34.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.576 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:34.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:16:34.576 00:16:34.576 --- 10.0.0.1 ping statistics --- 00:16:34.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.576 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=23331 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 23331 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 23331 ']' 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:34.576 [2024-07-15 20:14:59.426053] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:16:34.576 [2024-07-15 20:14:59.426095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.576 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.576 [2024-07-15 20:14:59.498576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.576 [2024-07-15 20:14:59.590738] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.576 [2024-07-15 20:14:59.590780] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.576 [2024-07-15 20:14:59.590791] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.576 [2024-07-15 20:14:59.590805] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.576 [2024-07-15 20:14:59.590812] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.576 [2024-07-15 20:14:59.590858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.576 [2024-07-15 20:14:59.590955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.576 [2024-07-15 20:14:59.591046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.576 [2024-07-15 20:14:59.591048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.576 20:14:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:34.835 [2024-07-15 20:14:59.978745] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.835 20:15:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:35.093 20:15:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:35.093 20:15:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:35.351 20:15:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:35.351 20:15:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:35.609 20:15:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:35.609 20:15:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:35.868 20:15:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:35.868 20:15:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:36.126 20:15:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:36.383 20:15:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:36.383 20:15:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:36.642 20:15:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:36.642 20:15:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:36.901 20:15:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:36.901 20:15:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:37.159 20:15:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:37.417 20:15:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:37.417 20:15:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:37.675 20:15:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:37.675 20:15:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:37.931 20:15:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:38.188 [2024-07-15 20:15:03.358947] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.188 20:15:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:38.445 20:15:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:38.702 20:15:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:40.075 20:15:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # 
waitforserial SPDKISFASTANDAWESOME 4 00:16:40.075 20:15:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:16:40.075 20:15:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:40.075 20:15:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:16:40.075 20:15:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:16:40.075 20:15:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:16:41.981 20:15:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:41.981 20:15:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:41.981 20:15:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:41.981 20:15:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:16:41.981 20:15:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:41.981 20:15:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:16:41.981 20:15:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:41.981 [global] 00:16:41.981 thread=1 00:16:41.981 invalidate=1 00:16:41.981 rw=write 00:16:41.981 time_based=1 00:16:41.981 runtime=1 00:16:41.981 ioengine=libaio 00:16:41.981 direct=1 00:16:41.981 bs=4096 00:16:41.981 iodepth=1 00:16:41.981 norandommap=0 00:16:41.981 numjobs=1 00:16:41.981 00:16:41.981 verify_dump=1 00:16:41.981 verify_backlog=512 00:16:41.981 verify_state_save=0 00:16:41.981 do_verify=1 00:16:41.981 verify=crc32c-intel 00:16:41.981 [job0] 00:16:41.981 filename=/dev/nvme0n1 00:16:41.981 [job1] 00:16:41.981 filename=/dev/nvme0n2 00:16:41.981 [job2] 00:16:41.981 filename=/dev/nvme0n3 00:16:41.981 [job3] 00:16:41.981 filename=/dev/nvme0n4 00:16:41.981 Could not set queue depth (nvme0n1) 00:16:41.981 Could not set queue depth (nvme0n2) 00:16:41.981 Could not set queue depth (nvme0n3) 00:16:41.981 Could not set queue depth (nvme0n4) 00:16:42.239 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:42.239 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:42.239 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:42.239 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:42.239 fio-3.35 00:16:42.239 Starting 4 threads 00:16:43.617 00:16:43.617 job0: (groupid=0, jobs=1): err= 0: pid=25092: Mon Jul 15 20:15:08 2024 00:16:43.617 read: IOPS=1206, BW=4827KiB/s (4943kB/s)(4832KiB/1001msec) 00:16:43.617 slat (nsec): min=4513, max=40447, avg=9163.21, stdev=2296.91 00:16:43.617 clat (usec): min=245, max=42098, avg=521.12, stdev=2350.39 00:16:43.617 lat (usec): min=252, max=42114, avg=530.28, stdev=2350.75 00:16:43.617 clat percentiles (usec): 00:16:43.617 | 1.00th=[ 273], 5.00th=[ 306], 10.00th=[ 318], 20.00th=[ 330], 00:16:43.617 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 363], 00:16:43.617 | 70.00th=[ 371], 80.00th=[ 383], 90.00th=[ 449], 95.00th=[ 469], 00:16:43.617 | 99.00th=[ 1090], 99.50th=[ 1844], 99.90th=[40633], 99.95th=[42206], 00:16:43.617 | 99.99th=[42206] 
00:16:43.617 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:43.617 slat (nsec): min=6045, max=40559, avg=11381.52, stdev=2550.73 00:16:43.617 clat (usec): min=171, max=1617, avg=217.18, stdev=40.49 00:16:43.617 lat (usec): min=177, max=1629, avg=228.56, stdev=40.85 00:16:43.617 clat percentiles (usec): 00:16:43.617 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 202], 00:16:43.617 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 219], 00:16:43.617 | 70.00th=[ 225], 80.00th=[ 233], 90.00th=[ 241], 95.00th=[ 247], 00:16:43.617 | 99.00th=[ 273], 99.50th=[ 306], 99.90th=[ 416], 99.95th=[ 1614], 00:16:43.617 | 99.99th=[ 1614] 00:16:43.617 bw ( KiB/s): min= 4328, max= 4328, per=23.37%, avg=4328.00, stdev= 0.00, samples=1 00:16:43.617 iops : min= 1082, max= 1082, avg=1082.00, stdev= 0.00, samples=1 00:16:43.617 lat (usec) : 250=53.83%, 500=44.83%, 750=0.69%, 1000=0.15% 00:16:43.617 lat (msec) : 2=0.29%, 4=0.04%, 50=0.18% 00:16:43.617 cpu : usr=2.20%, sys=4.30%, ctx=2744, majf=0, minf=1 00:16:43.617 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:43.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.617 issued rwts: total=1208,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.617 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:43.617 job1: (groupid=0, jobs=1): err= 0: pid=25093: Mon Jul 15 20:15:08 2024 00:16:43.617 read: IOPS=21, BW=84.6KiB/s (86.6kB/s)(88.0KiB/1040msec) 00:16:43.617 slat (nsec): min=9423, max=24795, avg=20722.32, stdev=3281.10 00:16:43.617 clat (usec): min=40864, max=41429, avg=40999.75, stdev=131.44 00:16:43.617 lat (usec): min=40886, max=41438, avg=41020.47, stdev=130.23 00:16:43.617 clat percentiles (usec): 00:16:43.617 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:43.617 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:43.617 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:43.617 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:43.617 | 99.99th=[41681] 00:16:43.617 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:16:43.617 slat (nsec): min=10620, max=37382, avg=12279.59, stdev=1938.91 00:16:43.618 clat (usec): min=191, max=425, avg=249.49, stdev=21.45 00:16:43.618 lat (usec): min=202, max=463, avg=261.77, stdev=21.94 00:16:43.618 clat percentiles (usec): 00:16:43.618 | 1.00th=[ 206], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 233], 00:16:43.618 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:16:43.618 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:16:43.618 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 424], 99.95th=[ 424], 00:16:43.618 | 99.99th=[ 424] 00:16:43.618 bw ( KiB/s): min= 4096, max= 4096, per=22.11%, avg=4096.00, stdev= 0.00, samples=1 00:16:43.618 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:43.618 lat (usec) : 250=49.63%, 500=46.25% 00:16:43.618 lat (msec) : 50=4.12% 00:16:43.618 cpu : usr=0.19%, sys=1.06%, ctx=535, majf=0, minf=1 00:16:43.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:43.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.618 issued rwts: total=22,512,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:16:43.618 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:43.618 job2: (groupid=0, jobs=1): err= 0: pid=25094: Mon Jul 15 20:15:08 2024 00:16:43.618 read: IOPS=21, BW=84.7KiB/s (86.7kB/s)(88.0KiB/1039msec) 00:16:43.618 slat (nsec): min=9915, max=25256, avg=17814.64, stdev=5471.56 00:16:43.618 clat (usec): min=40853, max=41422, avg=40998.08, stdev=109.57 00:16:43.618 lat (usec): min=40874, max=41432, avg=41015.90, stdev=107.82 00:16:43.618 clat percentiles (usec): 00:16:43.618 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:43.618 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:43.618 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:43.618 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:43.618 | 99.99th=[41681] 00:16:43.618 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:16:43.618 slat (nsec): min=11070, max=39080, avg=12988.81, stdev=2144.69 00:16:43.618 clat (usec): min=192, max=334, avg=248.62, stdev=19.56 00:16:43.618 lat (usec): min=204, max=346, avg=261.61, stdev=19.85 00:16:43.618 clat percentiles (usec): 00:16:43.618 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 235], 00:16:43.618 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:16:43.618 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:16:43.618 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 334], 99.95th=[ 334], 00:16:43.618 | 99.99th=[ 334] 00:16:43.618 bw ( KiB/s): min= 4096, max= 4096, per=22.11%, avg=4096.00, stdev= 0.00, samples=1 00:16:43.618 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:43.618 lat (usec) : 250=50.37%, 500=45.51% 00:16:43.618 lat (msec) : 50=4.12% 00:16:43.618 cpu : usr=0.77%, sys=0.58%, ctx=535, majf=0, minf=2 00:16:43.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:43.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.618 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.618 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:43.618 job3: (groupid=0, jobs=1): err= 0: pid=25095: Mon Jul 15 20:15:08 2024 00:16:43.618 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:43.618 slat (nsec): min=6395, max=28995, avg=7470.78, stdev=1237.93 00:16:43.618 clat (usec): min=186, max=1281, avg=256.62, stdev=43.38 00:16:43.618 lat (usec): min=193, max=1289, avg=264.09, stdev=43.53 00:16:43.618 clat percentiles (usec): 00:16:43.618 | 1.00th=[ 198], 5.00th=[ 219], 10.00th=[ 229], 20.00th=[ 237], 00:16:43.618 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:16:43.618 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 302], 00:16:43.618 | 99.00th=[ 379], 99.50th=[ 433], 99.90th=[ 578], 99.95th=[ 1254], 00:16:43.618 | 99.99th=[ 1287] 00:16:43.618 write: IOPS=2253, BW=9015KiB/s (9231kB/s)(9024KiB/1001msec); 0 zone resets 00:16:43.618 slat (nsec): min=9510, max=65460, avg=12191.11, stdev=4876.32 00:16:43.618 clat (usec): min=140, max=474, avg=186.10, stdev=26.17 00:16:43.618 lat (usec): min=150, max=485, avg=198.29, stdev=27.98 00:16:43.618 clat percentiles (usec): 00:16:43.618 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:16:43.618 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 186], 00:16:43.618 | 70.00th=[ 192], 80.00th=[ 
204], 90.00th=[ 227], 95.00th=[ 239], 00:16:43.618 | 99.00th=[ 251], 99.50th=[ 260], 99.90th=[ 383], 99.95th=[ 392], 00:16:43.618 | 99.99th=[ 474] 00:16:43.618 bw ( KiB/s): min= 8352, max= 8352, per=45.09%, avg=8352.00, stdev= 0.00, samples=1 00:16:43.618 iops : min= 2088, max= 2088, avg=2088.00, stdev= 0.00, samples=1 00:16:43.618 lat (usec) : 250=74.16%, 500=25.74%, 750=0.05% 00:16:43.618 lat (msec) : 2=0.05% 00:16:43.618 cpu : usr=2.70%, sys=4.30%, ctx=4305, majf=0, minf=1 00:16:43.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:43.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.618 issued rwts: total=2048,2256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.618 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:43.618 00:16:43.618 Run status group 0 (all jobs): 00:16:43.618 READ: bw=12.4MiB/s (13.0MB/s), 84.6KiB/s-8184KiB/s (86.6kB/s-8380kB/s), io=12.9MiB (13.5MB), run=1001-1040msec 00:16:43.618 WRITE: bw=18.1MiB/s (19.0MB/s), 1969KiB/s-9015KiB/s (2016kB/s-9231kB/s), io=18.8MiB (19.7MB), run=1001-1040msec 00:16:43.618 00:16:43.618 Disk stats (read/write): 00:16:43.618 nvme0n1: ios=1074/1224, merge=0/0, ticks=558/256, in_queue=814, util=87.07% 00:16:43.618 nvme0n2: ios=43/512, merge=0/0, ticks=1604/113, in_queue=1717, util=89.84% 00:16:43.618 nvme0n3: ios=41/512, merge=0/0, ticks=1604/126, in_queue=1730, util=93.44% 00:16:43.618 nvme0n4: ios=1656/2048, merge=0/0, ticks=1308/359, in_queue=1667, util=94.13% 00:16:43.618 20:15:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:43.618 [global] 00:16:43.618 thread=1 00:16:43.618 invalidate=1 00:16:43.618 rw=randwrite 00:16:43.618 time_based=1 00:16:43.618 runtime=1 00:16:43.618 ioengine=libaio 00:16:43.618 direct=1 00:16:43.618 bs=4096 00:16:43.618 iodepth=1 00:16:43.618 norandommap=0 00:16:43.618 numjobs=1 00:16:43.618 00:16:43.618 verify_dump=1 00:16:43.618 verify_backlog=512 00:16:43.618 verify_state_save=0 00:16:43.618 do_verify=1 00:16:43.618 verify=crc32c-intel 00:16:43.618 [job0] 00:16:43.618 filename=/dev/nvme0n1 00:16:43.618 [job1] 00:16:43.618 filename=/dev/nvme0n2 00:16:43.618 [job2] 00:16:43.618 filename=/dev/nvme0n3 00:16:43.618 [job3] 00:16:43.618 filename=/dev/nvme0n4 00:16:43.618 Could not set queue depth (nvme0n1) 00:16:43.618 Could not set queue depth (nvme0n2) 00:16:43.618 Could not set queue depth (nvme0n3) 00:16:43.618 Could not set queue depth (nvme0n4) 00:16:44.183 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.183 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.183 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.183 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.183 fio-3.35 00:16:44.183 Starting 4 threads 00:16:45.584 00:16:45.584 job0: (groupid=0, jobs=1): err= 0: pid=25785: Mon Jul 15 20:15:10 2024 00:16:45.584 read: IOPS=176, BW=706KiB/s (723kB/s)(724KiB/1025msec) 00:16:45.584 slat (nsec): min=7378, max=30121, avg=8876.49, stdev=2415.41 00:16:45.584 clat (usec): min=299, max=42011, avg=4912.47, stdev=12800.06 00:16:45.584 lat (usec): min=306, max=42025, 
avg=4921.35, stdev=12801.45 00:16:45.584 clat percentiles (usec): 00:16:45.584 | 1.00th=[ 306], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 338], 00:16:45.584 | 30.00th=[ 355], 40.00th=[ 408], 50.00th=[ 424], 60.00th=[ 457], 00:16:45.584 | 70.00th=[ 490], 80.00th=[ 510], 90.00th=[40633], 95.00th=[41157], 00:16:45.584 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:45.584 | 99.99th=[42206] 00:16:45.584 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:16:45.584 slat (nsec): min=10688, max=45294, avg=13556.18, stdev=2613.35 00:16:45.584 clat (usec): min=169, max=646, avg=243.65, stdev=62.11 00:16:45.584 lat (usec): min=181, max=659, avg=257.20, stdev=62.72 00:16:45.584 clat percentiles (usec): 00:16:45.584 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 198], 00:16:45.584 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 225], 60.00th=[ 237], 00:16:45.584 | 70.00th=[ 251], 80.00th=[ 273], 90.00th=[ 343], 95.00th=[ 363], 00:16:45.584 | 99.00th=[ 465], 99.50th=[ 502], 99.90th=[ 644], 99.95th=[ 644], 00:16:45.584 | 99.99th=[ 644] 00:16:45.584 bw ( KiB/s): min= 4096, max= 4096, per=23.55%, avg=4096.00, stdev= 0.00, samples=1 00:16:45.584 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:45.584 lat (usec) : 250=51.08%, 500=41.99%, 750=4.04% 00:16:45.584 lat (msec) : 50=2.89% 00:16:45.584 cpu : usr=0.49%, sys=1.27%, ctx=694, majf=0, minf=1 00:16:45.584 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:45.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.584 issued rwts: total=181,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.584 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:45.584 job1: (groupid=0, jobs=1): err= 0: pid=25791: Mon Jul 15 20:15:10 2024 00:16:45.584 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:45.584 slat (nsec): min=7069, max=39306, avg=8162.86, stdev=2214.40 00:16:45.584 clat (usec): min=255, max=41096, avg=718.36, stdev=3995.92 00:16:45.584 lat (usec): min=263, max=41114, avg=726.52, stdev=3997.01 00:16:45.584 clat percentiles (usec): 00:16:45.584 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 285], 00:16:45.584 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 318], 00:16:45.584 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 420], 95.00th=[ 433], 00:16:45.584 | 99.00th=[ 570], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:45.584 | 99.99th=[41157] 00:16:45.584 write: IOPS=1177, BW=4711KiB/s (4824kB/s)(4716KiB/1001msec); 0 zone resets 00:16:45.584 slat (nsec): min=4972, max=63514, avg=10231.07, stdev=3969.79 00:16:45.584 clat (usec): min=142, max=464, avg=201.45, stdev=47.92 00:16:45.584 lat (usec): min=157, max=471, avg=211.68, stdev=46.75 00:16:45.584 clat percentiles (usec): 00:16:45.584 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:16:45.584 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 196], 00:16:45.584 | 70.00th=[ 208], 80.00th=[ 221], 90.00th=[ 243], 95.00th=[ 343], 00:16:45.584 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 424], 99.95th=[ 465], 00:16:45.584 | 99.99th=[ 465] 00:16:45.584 bw ( KiB/s): min= 4096, max= 4096, per=23.55%, avg=4096.00, stdev= 0.00, samples=1 00:16:45.584 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:45.584 lat (usec) : 250=48.34%, 500=51.02%, 750=0.18% 00:16:45.584 lat (msec) : 50=0.45% 
00:16:45.584 cpu : usr=1.90%, sys=2.80%, ctx=2206, majf=0, minf=1 00:16:45.584 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:45.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.584 issued rwts: total=1024,1179,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.584 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:45.584 job2: (groupid=0, jobs=1): err= 0: pid=25799: Mon Jul 15 20:15:10 2024 00:16:45.584 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:45.584 slat (nsec): min=6426, max=23261, avg=7620.43, stdev=1067.61 00:16:45.584 clat (usec): min=289, max=1515, avg=357.46, stdev=61.54 00:16:45.584 lat (usec): min=297, max=1522, avg=365.08, stdev=61.48 00:16:45.584 clat percentiles (usec): 00:16:45.584 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 326], 00:16:45.584 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 351], 00:16:45.584 | 70.00th=[ 359], 80.00th=[ 371], 90.00th=[ 400], 95.00th=[ 486], 00:16:45.584 | 99.00th=[ 523], 99.50th=[ 545], 99.90th=[ 1188], 99.95th=[ 1516], 00:16:45.584 | 99.99th=[ 1516] 00:16:45.584 write: IOPS=1740, BW=6961KiB/s (7128kB/s)(6968KiB/1001msec); 0 zone resets 00:16:45.584 slat (usec): min=9, max=123, avg=11.75, stdev= 3.72 00:16:45.584 clat (usec): min=181, max=501, avg=235.58, stdev=31.57 00:16:45.584 lat (usec): min=192, max=624, avg=247.33, stdev=33.01 00:16:45.584 clat percentiles (usec): 00:16:45.584 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 215], 00:16:45.584 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 235], 00:16:45.584 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 273], 95.00th=[ 289], 00:16:45.584 | 99.00th=[ 355], 99.50th=[ 396], 99.90th=[ 486], 99.95th=[ 502], 00:16:45.584 | 99.99th=[ 502] 00:16:45.584 bw ( KiB/s): min= 8192, max= 8192, per=47.10%, avg=8192.00, stdev= 0.00, samples=1 00:16:45.584 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:45.584 lat (usec) : 250=41.55%, 500=56.89%, 750=1.43%, 1000=0.06% 00:16:45.584 lat (msec) : 2=0.06% 00:16:45.584 cpu : usr=1.80%, sys=4.10%, ctx=3280, majf=0, minf=1 00:16:45.584 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:45.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.584 issued rwts: total=1536,1742,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.584 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:45.584 job3: (groupid=0, jobs=1): err= 0: pid=25808: Mon Jul 15 20:15:10 2024 00:16:45.584 read: IOPS=515, BW=2061KiB/s (2110kB/s)(2104KiB/1021msec) 00:16:45.584 slat (nsec): min=7082, max=31900, avg=8695.75, stdev=3007.17 00:16:45.584 clat (usec): min=264, max=41914, avg=1403.19, stdev=6570.35 00:16:45.584 lat (usec): min=272, max=41938, avg=1411.88, stdev=6572.73 00:16:45.584 clat percentiles (usec): 00:16:45.584 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 302], 00:16:45.584 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 318], 00:16:45.584 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 343], 95.00th=[ 363], 00:16:45.584 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:16:45.584 | 99.99th=[41681] 00:16:45.584 write: IOPS=1002, BW=4012KiB/s (4108kB/s)(4096KiB/1021msec); 0 zone resets 00:16:45.584 slat (nsec): min=10309, max=45723, 
avg=12486.86, stdev=2676.68 00:16:45.584 clat (usec): min=186, max=1564, avg=254.42, stdev=73.70 00:16:45.584 lat (usec): min=197, max=1576, avg=266.91, stdev=74.21 00:16:45.584 clat percentiles (usec): 00:16:45.584 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 221], 00:16:45.584 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 243], 00:16:45.585 | 70.00th=[ 249], 80.00th=[ 265], 90.00th=[ 318], 95.00th=[ 359], 00:16:45.585 | 99.00th=[ 486], 99.50th=[ 570], 99.90th=[ 1221], 99.95th=[ 1565], 00:16:45.585 | 99.99th=[ 1565] 00:16:45.585 bw ( KiB/s): min= 8192, max= 8192, per=47.10%, avg=8192.00, stdev= 0.00, samples=1 00:16:45.585 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:45.585 lat (usec) : 250=46.77%, 500=51.55%, 750=0.45%, 1000=0.13% 00:16:45.585 lat (msec) : 2=0.19%, 50=0.90% 00:16:45.585 cpu : usr=1.96%, sys=2.06%, ctx=1550, majf=0, minf=1 00:16:45.585 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:45.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.585 issued rwts: total=526,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.585 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:45.585 00:16:45.585 Run status group 0 (all jobs): 00:16:45.585 READ: bw=12.5MiB/s (13.1MB/s), 706KiB/s-6138KiB/s (723kB/s-6285kB/s), io=12.8MiB (13.4MB), run=1001-1025msec 00:16:45.585 WRITE: bw=17.0MiB/s (17.8MB/s), 1998KiB/s-6961KiB/s (2046kB/s-7128kB/s), io=17.4MiB (18.3MB), run=1001-1025msec 00:16:45.585 00:16:45.585 Disk stats (read/write): 00:16:45.585 nvme0n1: ios=233/512, merge=0/0, ticks=1114/121, in_queue=1235, util=97.27% 00:16:45.585 nvme0n2: ios=541/712, merge=0/0, ticks=1527/150, in_queue=1677, util=96.87% 00:16:45.585 nvme0n3: ios=1046/1475, merge=0/0, ticks=1270/340, in_queue=1610, util=96.59% 00:16:45.585 nvme0n4: ios=525/1024, merge=0/0, ticks=686/242, in_queue=928, util=91.12% 00:16:45.585 20:15:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:45.585 [global] 00:16:45.585 thread=1 00:16:45.585 invalidate=1 00:16:45.585 rw=write 00:16:45.585 time_based=1 00:16:45.585 runtime=1 00:16:45.585 ioengine=libaio 00:16:45.585 direct=1 00:16:45.585 bs=4096 00:16:45.585 iodepth=128 00:16:45.585 norandommap=0 00:16:45.585 numjobs=1 00:16:45.585 00:16:45.585 verify_dump=1 00:16:45.585 verify_backlog=512 00:16:45.585 verify_state_save=0 00:16:45.585 do_verify=1 00:16:45.585 verify=crc32c-intel 00:16:45.585 [job0] 00:16:45.585 filename=/dev/nvme0n1 00:16:45.585 [job1] 00:16:45.585 filename=/dev/nvme0n2 00:16:45.585 [job2] 00:16:45.585 filename=/dev/nvme0n3 00:16:45.585 [job3] 00:16:45.585 filename=/dev/nvme0n4 00:16:45.585 Could not set queue depth (nvme0n1) 00:16:45.585 Could not set queue depth (nvme0n2) 00:16:45.585 Could not set queue depth (nvme0n3) 00:16:45.585 Could not set queue depth (nvme0n4) 00:16:45.844 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:45.844 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:45.844 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:45.844 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
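For readers unfamiliar with fio job files: the [global]/[jobN] dump echoed above for this pass is equivalent to invoking fio directly with roughly the following flags (one of the four jobs shown; a sketch of the same options, not what the fio-wrapper script literally executes):

    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --thread \
        --rw=write --bs=4096 --iodepth=128 \
        --time_based --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_backlog=512

With verify=crc32c-intel, fio checksums each block it writes and re-checks the data on readback, which is what the do_verify=1/verify_dump=1 lines in the dump above enable.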
00:16:45.844 fio-3.35 00:16:45.844 Starting 4 threads 00:16:47.219 00:16:47.219 job0: (groupid=0, jobs=1): err= 0: pid=26333: Mon Jul 15 20:15:12 2024 00:16:47.219 read: IOPS=3005, BW=11.7MiB/s (12.3MB/s)(12.0MiB/1022msec) 00:16:47.219 slat (nsec): min=1571, max=34246k, avg=180078.44, stdev=1341271.54 00:16:47.219 clat (usec): min=3645, max=85426, avg=23425.99, stdev=15045.01 00:16:47.219 lat (usec): min=3653, max=85465, avg=23606.07, stdev=15146.73 00:16:47.219 clat percentiles (usec): 00:16:47.219 | 1.00th=[ 7242], 5.00th=[ 9110], 10.00th=[11207], 20.00th=[12911], 00:16:47.219 | 30.00th=[13304], 40.00th=[14222], 50.00th=[16712], 60.00th=[21890], 00:16:47.219 | 70.00th=[26870], 80.00th=[32113], 90.00th=[48497], 95.00th=[54264], 00:16:47.219 | 99.00th=[76022], 99.50th=[76022], 99.90th=[76022], 99.95th=[82314], 00:16:47.219 | 99.99th=[85459] 00:16:47.219 write: IOPS=3183, BW=12.4MiB/s (13.0MB/s)(12.7MiB/1022msec); 0 zone resets 00:16:47.219 slat (usec): min=2, max=19679, avg=124.72, stdev=894.60 00:16:47.219 clat (usec): min=1496, max=65961, avg=17752.64, stdev=9474.85 00:16:47.219 lat (usec): min=1512, max=65997, avg=17877.36, stdev=9524.26 00:16:47.219 clat percentiles (usec): 00:16:47.219 | 1.00th=[ 3884], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[11076], 00:16:47.219 | 30.00th=[12387], 40.00th=[13566], 50.00th=[14353], 60.00th=[18482], 00:16:47.219 | 70.00th=[19268], 80.00th=[23987], 90.00th=[31327], 95.00th=[31589], 00:16:47.219 | 99.00th=[54264], 99.50th=[56361], 99.90th=[56886], 99.95th=[56886], 00:16:47.219 | 99.99th=[65799] 00:16:47.219 bw ( KiB/s): min= 8624, max=16384, per=25.79%, avg=12504.00, stdev=5487.15, samples=2 00:16:47.219 iops : min= 2156, max= 4096, avg=3126.00, stdev=1371.79, samples=2 00:16:47.219 lat (msec) : 2=0.03%, 4=0.79%, 10=9.07%, 20=55.17%, 50=29.24% 00:16:47.219 lat (msec) : 100=5.69% 00:16:47.219 cpu : usr=2.15%, sys=4.02%, ctx=253, majf=0, minf=1 00:16:47.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:47.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:47.219 issued rwts: total=3072,3254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.219 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:47.219 job1: (groupid=0, jobs=1): err= 0: pid=26339: Mon Jul 15 20:15:12 2024 00:16:47.219 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:16:47.219 slat (usec): min=2, max=19460, avg=104.56, stdev=650.69 00:16:47.219 clat (usec): min=7600, max=38209, avg=14639.19, stdev=4533.12 00:16:47.219 lat (usec): min=8358, max=39074, avg=14743.75, stdev=4571.50 00:16:47.219 clat percentiles (usec): 00:16:47.219 | 1.00th=[ 8848], 5.00th=[10683], 10.00th=[11076], 20.00th=[11731], 00:16:47.219 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13566], 60.00th=[14091], 00:16:47.219 | 70.00th=[14877], 80.00th=[16319], 90.00th=[19006], 95.00th=[25035], 00:16:47.219 | 99.00th=[35390], 99.50th=[36963], 99.90th=[38011], 99.95th=[38011], 00:16:47.219 | 99.99th=[38011] 00:16:47.219 write: IOPS=4524, BW=17.7MiB/s (18.5MB/s)(17.8MiB/1006msec); 0 zone resets 00:16:47.219 slat (usec): min=3, max=13949, avg=112.85, stdev=764.61 00:16:47.219 clat (usec): min=4984, max=43112, avg=14793.72, stdev=4509.66 00:16:47.219 lat (usec): min=4999, max=50457, avg=14906.57, stdev=4577.11 00:16:47.219 clat percentiles (usec): 00:16:47.219 | 1.00th=[ 8356], 5.00th=[10683], 10.00th=[10945], 20.00th=[11338], 00:16:47.219 | 30.00th=[11994], 
40.00th=[13042], 50.00th=[13566], 60.00th=[13960], 00:16:47.219 | 70.00th=[15533], 80.00th=[16909], 90.00th=[21627], 95.00th=[23462], 00:16:47.219 | 99.00th=[28181], 99.50th=[28181], 99.90th=[40633], 99.95th=[40633], 00:16:47.219 | 99.99th=[43254] 00:16:47.219 bw ( KiB/s): min=16384, max=19016, per=36.50%, avg=17700.00, stdev=1861.11, samples=2 00:16:47.219 iops : min= 4096, max= 4754, avg=4425.00, stdev=465.28, samples=2 00:16:47.219 lat (msec) : 10=3.11%, 20=84.54%, 50=12.35% 00:16:47.219 cpu : usr=4.58%, sys=6.17%, ctx=290, majf=0, minf=1 00:16:47.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:47.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:47.219 issued rwts: total=4096,4552,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.219 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:47.219 job2: (groupid=0, jobs=1): err= 0: pid=26345: Mon Jul 15 20:15:12 2024 00:16:47.219 read: IOPS=1739, BW=6960KiB/s (7127kB/s)(7120KiB/1023msec) 00:16:47.219 slat (usec): min=2, max=33383, avg=265.86, stdev=1752.08 00:16:47.219 clat (msec): min=4, max=144, avg=30.52, stdev=22.36 00:16:47.219 lat (msec): min=7, max=144, avg=30.79, stdev=22.55 00:16:47.219 clat percentiles (msec): 00:16:47.219 | 1.00th=[ 13], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 18], 00:16:47.219 | 30.00th=[ 20], 40.00th=[ 26], 50.00th=[ 27], 60.00th=[ 30], 00:16:47.219 | 70.00th=[ 30], 80.00th=[ 32], 90.00th=[ 42], 95.00th=[ 78], 00:16:47.220 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:16:47.220 | 99.99th=[ 146] 00:16:47.220 write: IOPS=2001, BW=8008KiB/s (8200kB/s)(8192KiB/1023msec); 0 zone resets 00:16:47.220 slat (usec): min=3, max=16697, avg=243.90, stdev=1178.55 00:16:47.220 clat (usec): min=1524, max=144803, avg=36849.90, stdev=24042.41 00:16:47.220 lat (usec): min=1537, max=144812, avg=37093.81, stdev=24141.00 00:16:47.220 clat percentiles (msec): 00:16:47.220 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 17], 00:16:47.220 | 30.00th=[ 23], 40.00th=[ 28], 50.00th=[ 33], 60.00th=[ 38], 00:16:47.220 | 70.00th=[ 42], 80.00th=[ 50], 90.00th=[ 69], 95.00th=[ 96], 00:16:47.220 | 99.00th=[ 117], 99.50th=[ 117], 99.90th=[ 120], 99.95th=[ 146], 00:16:47.220 | 99.99th=[ 146] 00:16:47.220 bw ( KiB/s): min= 7568, max= 8816, per=16.89%, avg=8192.00, stdev=882.47, samples=2 00:16:47.220 iops : min= 1892, max= 2204, avg=2048.00, stdev=220.62, samples=2 00:16:47.220 lat (msec) : 2=0.29%, 4=0.21%, 10=0.89%, 20=28.47%, 50=57.94% 00:16:47.220 lat (msec) : 100=8.49%, 250=3.71% 00:16:47.220 cpu : usr=2.15%, sys=2.94%, ctx=247, majf=0, minf=1 00:16:47.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:16:47.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:47.220 issued rwts: total=1780,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.220 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:47.220 job3: (groupid=0, jobs=1): err= 0: pid=26350: Mon Jul 15 20:15:12 2024 00:16:47.220 read: IOPS=2123, BW=8492KiB/s (8696kB/s)(8696KiB/1024msec) 00:16:47.220 slat (usec): min=2, max=24429, avg=206.79, stdev=1496.32 00:16:47.220 clat (usec): min=4817, max=66224, avg=25333.70, stdev=11473.56 00:16:47.220 lat (usec): min=6956, max=66229, avg=25540.49, stdev=11570.88 00:16:47.220 clat percentiles (usec): 
00:16:47.220 | 1.00th=[11994], 5.00th=[12780], 10.00th=[15008], 20.00th=[15664], 00:16:47.220 | 30.00th=[16319], 40.00th=[19268], 50.00th=[25035], 60.00th=[26608], 00:16:47.220 | 70.00th=[29492], 80.00th=[30016], 90.00th=[41157], 95.00th=[51119], 00:16:47.220 | 99.00th=[61604], 99.50th=[64750], 99.90th=[66323], 99.95th=[66323], 00:16:47.220 | 99.99th=[66323] 00:16:47.220 write: IOPS=2500, BW=9.77MiB/s (10.2MB/s)(10.0MiB/1024msec); 0 zone resets 00:16:47.220 slat (usec): min=3, max=24849, avg=212.20, stdev=1323.54 00:16:47.220 clat (usec): min=1880, max=112275, avg=28342.01, stdev=19622.92 00:16:47.220 lat (msec): min=3, max=112, avg=28.55, stdev=19.73 00:16:47.220 clat percentiles (msec): 00:16:47.220 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 14], 20.00th=[ 15], 00:16:47.220 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 23], 60.00th=[ 27], 00:16:47.220 | 70.00th=[ 35], 80.00th=[ 42], 90.00th=[ 51], 95.00th=[ 69], 00:16:47.220 | 99.00th=[ 105], 99.50th=[ 108], 99.90th=[ 113], 99.95th=[ 113], 00:16:47.220 | 99.99th=[ 113] 00:16:47.220 bw ( KiB/s): min= 9520, max=10944, per=21.10%, avg=10232.00, stdev=1006.92, samples=2 00:16:47.220 iops : min= 2380, max= 2736, avg=2558.00, stdev=251.73, samples=2 00:16:47.220 lat (msec) : 2=0.02%, 4=0.34%, 10=3.25%, 20=41.97%, 50=46.73% 00:16:47.220 lat (msec) : 100=6.97%, 250=0.72% 00:16:47.220 cpu : usr=2.44%, sys=3.71%, ctx=231, majf=0, minf=1 00:16:47.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:16:47.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:47.220 issued rwts: total=2174,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.220 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:47.220 00:16:47.220 Run status group 0 (all jobs): 00:16:47.220 READ: bw=42.4MiB/s (44.5MB/s), 6960KiB/s-15.9MiB/s (7127kB/s-16.7MB/s), io=43.4MiB (45.6MB), run=1006-1024msec 00:16:47.220 WRITE: bw=47.4MiB/s (49.7MB/s), 8008KiB/s-17.7MiB/s (8200kB/s-18.5MB/s), io=48.5MiB (50.8MB), run=1006-1024msec 00:16:47.220 00:16:47.220 Disk stats (read/write): 00:16:47.220 nvme0n1: ios=2610/2735, merge=0/0, ticks=30008/24435, in_queue=54443, util=87.37% 00:16:47.220 nvme0n2: ios=3609/3873, merge=0/0, ticks=28911/28389, in_queue=57300, util=93.20% 00:16:47.220 nvme0n3: ios=1593/1567, merge=0/0, ticks=46119/57871, in_queue=103990, util=95.01% 00:16:47.220 nvme0n4: ios=2093/2063, merge=0/0, ticks=47555/48880, in_queue=96435, util=96.65% 00:16:47.220 20:15:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:47.220 [global] 00:16:47.220 thread=1 00:16:47.220 invalidate=1 00:16:47.220 rw=randwrite 00:16:47.220 time_based=1 00:16:47.220 runtime=1 00:16:47.220 ioengine=libaio 00:16:47.220 direct=1 00:16:47.220 bs=4096 00:16:47.220 iodepth=128 00:16:47.220 norandommap=0 00:16:47.220 numjobs=1 00:16:47.220 00:16:47.220 verify_dump=1 00:16:47.220 verify_backlog=512 00:16:47.220 verify_state_save=0 00:16:47.220 do_verify=1 00:16:47.220 verify=crc32c-intel 00:16:47.220 [job0] 00:16:47.220 filename=/dev/nvme0n1 00:16:47.220 [job1] 00:16:47.220 filename=/dev/nvme0n2 00:16:47.220 [job2] 00:16:47.220 filename=/dev/nvme0n3 00:16:47.220 [job3] 00:16:47.220 filename=/dev/nvme0n4 00:16:47.220 Could not set queue depth (nvme0n1) 00:16:47.220 Could not set queue depth (nvme0n2) 00:16:47.220 Could not set queue depth (nvme0n3) 
00:16:47.220 Could not set queue depth (nvme0n4) 00:16:47.477 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:47.477 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:47.477 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:47.477 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:47.477 fio-3.35 00:16:47.477 Starting 4 threads 00:16:48.877 00:16:48.877 job0: (groupid=0, jobs=1): err= 0: pid=26782: Mon Jul 15 20:15:13 2024 00:16:48.877 read: IOPS=4623, BW=18.1MiB/s (18.9MB/s)(18.2MiB/1010msec) 00:16:48.877 slat (nsec): min=1817, max=11069k, avg=102946.37, stdev=688546.99 00:16:48.877 clat (usec): min=3672, max=35281, avg=13837.36, stdev=5694.56 00:16:48.877 lat (usec): min=3720, max=35306, avg=13940.30, stdev=5757.27 00:16:48.877 clat percentiles (usec): 00:16:48.877 | 1.00th=[ 5735], 5.00th=[ 7767], 10.00th=[ 8291], 20.00th=[ 8848], 00:16:48.877 | 30.00th=[ 9241], 40.00th=[10814], 50.00th=[11994], 60.00th=[13304], 00:16:48.877 | 70.00th=[17171], 80.00th=[19530], 90.00th=[21890], 95.00th=[23725], 00:16:48.877 | 99.00th=[28443], 99.50th=[29230], 99.90th=[29492], 99.95th=[32900], 00:16:48.877 | 99.99th=[35390] 00:16:48.877 write: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec); 0 zone resets 00:16:48.877 slat (usec): min=2, max=29965, avg=87.31, stdev=755.03 00:16:48.877 clat (usec): min=704, max=50923, avg=12354.73, stdev=7300.79 00:16:48.877 lat (usec): min=713, max=50934, avg=12442.04, stdev=7368.51 00:16:48.877 clat percentiles (usec): 00:16:48.877 | 1.00th=[ 3654], 5.00th=[ 5211], 10.00th=[ 5866], 20.00th=[ 7963], 00:16:48.877 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[10159], 00:16:48.877 | 70.00th=[13435], 80.00th=[17171], 90.00th=[21627], 95.00th=[21627], 00:16:48.877 | 99.00th=[41681], 99.50th=[47973], 99.90th=[51119], 99.95th=[51119], 00:16:48.877 | 99.99th=[51119] 00:16:48.877 bw ( KiB/s): min=16384, max=24056, per=39.83%, avg=20220.00, stdev=5424.92, samples=2 00:16:48.877 iops : min= 4096, max= 6014, avg=5055.00, stdev=1356.23, samples=2 00:16:48.877 lat (usec) : 750=0.03% 00:16:48.877 lat (msec) : 2=0.05%, 4=1.06%, 10=48.24%, 20=33.65%, 50=16.72% 00:16:48.877 lat (msec) : 100=0.25% 00:16:48.877 cpu : usr=6.44%, sys=5.45%, ctx=430, majf=0, minf=1 00:16:48.877 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:48.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.877 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:48.877 issued rwts: total=4670,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.877 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:48.877 job1: (groupid=0, jobs=1): err= 0: pid=26795: Mon Jul 15 20:15:13 2024 00:16:48.877 read: IOPS=2313, BW=9255KiB/s (9477kB/s)(9708KiB/1049msec) 00:16:48.877 slat (nsec): min=1913, max=21858k, avg=177404.98, stdev=1303927.19 00:16:48.877 clat (usec): min=6449, max=72738, avg=23707.08, stdev=12758.90 00:16:48.877 lat (usec): min=6456, max=78569, avg=23884.48, stdev=12816.07 00:16:48.877 clat percentiles (usec): 00:16:48.877 | 1.00th=[ 9241], 5.00th=[12125], 10.00th=[14353], 20.00th=[14877], 00:16:48.877 | 30.00th=[15664], 40.00th=[18220], 50.00th=[20579], 60.00th=[22938], 00:16:48.877 | 70.00th=[23200], 80.00th=[27395], 90.00th=[39060], 
95.00th=[55313], 00:16:48.877 | 99.00th=[72877], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:16:48.877 | 99.99th=[72877] 00:16:48.877 write: IOPS=2440, BW=9762KiB/s (9996kB/s)(10.0MiB/1049msec); 0 zone resets 00:16:48.877 slat (usec): min=3, max=28574, avg=215.36, stdev=1440.17 00:16:48.877 clat (msec): min=3, max=109, avg=29.30, stdev=19.36 00:16:48.877 lat (msec): min=3, max=109, avg=29.52, stdev=19.47 00:16:48.877 clat percentiles (msec): 00:16:48.877 | 1.00th=[ 7], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 16], 00:16:48.877 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 27], 00:16:48.877 | 70.00th=[ 28], 80.00th=[ 31], 90.00th=[ 57], 95.00th=[ 81], 00:16:48.877 | 99.00th=[ 105], 99.50th=[ 109], 99.90th=[ 110], 99.95th=[ 110], 00:16:48.877 | 99.99th=[ 110] 00:16:48.877 bw ( KiB/s): min= 9392, max=11088, per=20.17%, avg=10240.00, stdev=1199.25, samples=2 00:16:48.878 iops : min= 2348, max= 2772, avg=2560.00, stdev=299.81, samples=2 00:16:48.878 lat (msec) : 4=0.12%, 10=1.70%, 20=31.24%, 50=58.49%, 100=7.50% 00:16:48.878 lat (msec) : 250=0.94% 00:16:48.878 cpu : usr=2.67%, sys=3.15%, ctx=272, majf=0, minf=1 00:16:48.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:16:48.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:48.878 issued rwts: total=2427,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:48.878 job2: (groupid=0, jobs=1): err= 0: pid=26820: Mon Jul 15 20:15:13 2024 00:16:48.878 read: IOPS=2309, BW=9237KiB/s (9458kB/s)(9680KiB/1048msec) 00:16:48.878 slat (nsec): min=1854, max=36246k, avg=254176.57, stdev=1824456.21 00:16:48.878 clat (msec): min=7, max=146, avg=29.46, stdev=23.97 00:16:48.878 lat (msec): min=7, max=146, avg=29.71, stdev=24.15 00:16:48.878 clat percentiles (msec): 00:16:48.878 | 1.00th=[ 10], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 16], 00:16:48.878 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 23], 60.00th=[ 24], 00:16:48.878 | 70.00th=[ 27], 80.00th=[ 34], 90.00th=[ 71], 95.00th=[ 79], 00:16:48.878 | 99.00th=[ 132], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:16:48.878 | 99.99th=[ 146] 00:16:48.878 write: IOPS=2442, BW=9771KiB/s (10.0MB/s)(10.0MiB/1048msec); 0 zone resets 00:16:48.878 slat (usec): min=3, max=21562, avg=144.35, stdev=876.47 00:16:48.878 clat (usec): min=1719, max=146388, avg=24043.20, stdev=15589.48 00:16:48.878 lat (usec): min=1731, max=146404, avg=24187.54, stdev=15636.64 00:16:48.878 clat percentiles (msec): 00:16:48.878 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:16:48.878 | 30.00th=[ 14], 40.00th=[ 22], 50.00th=[ 23], 60.00th=[ 24], 00:16:48.878 | 70.00th=[ 26], 80.00th=[ 29], 90.00th=[ 39], 95.00th=[ 61], 00:16:48.878 | 99.00th=[ 88], 99.50th=[ 99], 99.90th=[ 112], 99.95th=[ 146], 00:16:48.878 | 99.99th=[ 146] 00:16:48.878 bw ( KiB/s): min= 7824, max=12656, per=20.17%, avg=10240.00, stdev=3416.74, samples=2 00:16:48.878 iops : min= 1956, max= 3164, avg=2560.00, stdev=854.18, samples=2 00:16:48.878 lat (msec) : 2=0.16%, 4=0.12%, 10=2.27%, 20=38.31%, 50=49.92% 00:16:48.878 lat (msec) : 100=7.79%, 250=1.43% 00:16:48.878 cpu : usr=2.48%, sys=3.53%, ctx=263, majf=0, minf=1 00:16:48.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:16:48.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:48.878 issued rwts: total=2420,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:48.878 job3: (groupid=0, jobs=1): err= 0: pid=26828: Mon Jul 15 20:15:13 2024 00:16:48.878 read: IOPS=2631, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1005msec) 00:16:48.878 slat (usec): min=2, max=16924, avg=192.32, stdev=1142.76 00:16:48.878 clat (usec): min=1382, max=59062, avg=24228.44, stdev=11639.88 00:16:48.878 lat (usec): min=4666, max=59074, avg=24420.77, stdev=11741.14 00:16:48.878 clat percentiles (usec): 00:16:48.878 | 1.00th=[ 4883], 5.00th=[14353], 10.00th=[15139], 20.00th=[16188], 00:16:48.878 | 30.00th=[16909], 40.00th=[17695], 50.00th=[17695], 60.00th=[18744], 00:16:48.878 | 70.00th=[30802], 80.00th=[37487], 90.00th=[43779], 95.00th=[45876], 00:16:48.878 | 99.00th=[53216], 99.50th=[56361], 99.90th=[58983], 99.95th=[58983], 00:16:48.878 | 99.99th=[58983] 00:16:48.878 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:16:48.878 slat (usec): min=3, max=13209, avg=150.45, stdev=926.11 00:16:48.878 clat (usec): min=488, max=52004, avg=20318.96, stdev=7684.96 00:16:48.878 lat (usec): min=1281, max=52027, avg=20469.40, stdev=7773.03 00:16:48.878 clat percentiles (usec): 00:16:48.878 | 1.00th=[ 3523], 5.00th=[ 8455], 10.00th=[15008], 20.00th=[15926], 00:16:48.878 | 30.00th=[16909], 40.00th=[17433], 50.00th=[17695], 60.00th=[18220], 00:16:48.878 | 70.00th=[21890], 80.00th=[26608], 90.00th=[31589], 95.00th=[32637], 00:16:48.878 | 99.00th=[41681], 99.50th=[42206], 99.90th=[50594], 99.95th=[50594], 00:16:48.878 | 99.99th=[52167] 00:16:48.878 bw ( KiB/s): min= 8192, max=16040, per=23.87%, avg=12116.00, stdev=5549.37, samples=2 00:16:48.878 iops : min= 2048, max= 4010, avg=3029.00, stdev=1387.34, samples=2 00:16:48.878 lat (usec) : 500=0.02% 00:16:48.878 lat (msec) : 2=0.33%, 4=0.44%, 10=3.27%, 20=61.29%, 50=33.29% 00:16:48.878 lat (msec) : 100=1.36% 00:16:48.878 cpu : usr=2.49%, sys=5.08%, ctx=272, majf=0, minf=1 00:16:48.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:48.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:48.878 issued rwts: total=2645,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:48.878 00:16:48.878 Run status group 0 (all jobs): 00:16:48.878 READ: bw=45.3MiB/s (47.5MB/s), 9237KiB/s-18.1MiB/s (9458kB/s-18.9MB/s), io=47.5MiB (49.8MB), run=1005-1049msec 00:16:48.878 WRITE: bw=49.6MiB/s (52.0MB/s), 9762KiB/s-19.8MiB/s (9996kB/s-20.8MB/s), io=52.0MiB (54.5MB), run=1005-1049msec 00:16:48.878 00:16:48.878 Disk stats (read/write): 00:16:48.878 nvme0n1: ios=3730/4096, merge=0/0, ticks=36724/34454, in_queue=71178, util=91.68% 00:16:48.878 nvme0n2: ios=1800/2048, merge=0/0, ticks=38320/65334, in_queue=103654, util=91.73% 00:16:48.878 nvme0n3: ios=2105/2487, merge=0/0, ticks=44101/54682, in_queue=98783, util=93.11% 00:16:48.878 nvme0n4: ios=2091/2199, merge=0/0, ticks=19077/15652, in_queue=34729, util=97.21% 00:16:48.878 20:15:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:48.878 20:15:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=27011 00:16:48.878 20:15:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:48.878 
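Judging by the job files echoed throughout this log, the fio-wrapper flags map onto job-file options as -i → bs, -d → iodepth, -t → rw and -r → runtime, so the invocation above starts a 10-second, queue-depth-1 sequential-read workload in the background (hence the captured fio_pid). The hotplug pass that follows boils down to this pattern (a condensed sketch of the fio.sh steps traced below, with the long workspace paths shortened; not the verbatim script):

    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # background I/O against the exported namespaces
    fio_pid=$!
    sleep 3
    scripts/rpc.py bdev_raid_delete concat0                    # hot-remove the backing bdevs
    scripts/rpc.py bdev_raid_delete raid0                      # while fio is still running
    for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
        scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
    done
    wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'

Deleting the bdevs out from under the subsystem is expected to surface as Remote I/O errors (err=121) in fio, which is exactly what the per-job reports below show.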
20:15:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:48.878 [global] 00:16:48.878 thread=1 00:16:48.878 invalidate=1 00:16:48.878 rw=read 00:16:48.878 time_based=1 00:16:48.878 runtime=10 00:16:48.878 ioengine=libaio 00:16:48.878 direct=1 00:16:48.878 bs=4096 00:16:48.878 iodepth=1 00:16:48.878 norandommap=1 00:16:48.878 numjobs=1 00:16:48.878 00:16:48.878 [job0] 00:16:48.878 filename=/dev/nvme0n1 00:16:48.878 [job1] 00:16:48.878 filename=/dev/nvme0n2 00:16:48.878 [job2] 00:16:48.878 filename=/dev/nvme0n3 00:16:48.878 [job3] 00:16:48.878 filename=/dev/nvme0n4 00:16:48.878 Could not set queue depth (nvme0n1) 00:16:48.878 Could not set queue depth (nvme0n2) 00:16:48.878 Could not set queue depth (nvme0n3) 00:16:48.878 Could not set queue depth (nvme0n4) 00:16:49.143 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.143 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.143 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.143 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.143 fio-3.35 00:16:49.143 Starting 4 threads 00:16:51.669 20:15:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:51.927 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=31023104, buflen=4096 00:16:51.927 fio: pid=27314, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:51.927 20:15:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:52.185 20:15:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:52.185 20:15:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:52.185 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=290816, buflen=4096 00:16:52.185 fio: pid=27306, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:52.444 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=39481344, buflen=4096 00:16:52.444 fio: pid=27278, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:52.444 20:15:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:52.444 20:15:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:52.444 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=6340608, buflen=4096 00:16:52.444 fio: pid=27290, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:52.444 20:15:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:52.444 20:15:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:52.744 00:16:52.744 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=27278: Mon Jul 15 20:15:17 2024 00:16:52.744 read: IOPS=3084, BW=12.0MiB/s 
(12.6MB/s)(37.7MiB/3125msec) 00:16:52.744 slat (usec): min=6, max=15467, avg=10.61, stdev=196.17 00:16:52.744 clat (usec): min=229, max=1567, avg=309.35, stdev=37.75 00:16:52.744 lat (usec): min=245, max=15971, avg=319.95, stdev=202.39 00:16:52.744 clat percentiles (usec): 00:16:52.744 | 1.00th=[ 260], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 289], 00:16:52.744 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 314], 00:16:52.744 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 334], 95.00th=[ 343], 00:16:52.744 | 99.00th=[ 388], 99.50th=[ 433], 99.90th=[ 717], 99.95th=[ 1221], 00:16:52.744 | 99.99th=[ 1565] 00:16:52.744 bw ( KiB/s): min=11856, max=12768, per=54.86%, avg=12429.50, stdev=398.62, samples=6 00:16:52.744 iops : min= 2964, max= 3192, avg=3107.67, stdev=99.90, samples=6 00:16:52.744 lat (usec) : 250=0.31%, 500=99.47%, 750=0.11%, 1000=0.03% 00:16:52.744 lat (msec) : 2=0.06% 00:16:52.744 cpu : usr=1.89%, sys=4.74%, ctx=9642, majf=0, minf=1 00:16:52.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.744 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.744 issued rwts: total=9640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.744 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=27290: Mon Jul 15 20:15:17 2024 00:16:52.744 read: IOPS=465, BW=1862KiB/s (1907kB/s)(6192KiB/3325msec) 00:16:52.744 slat (usec): min=6, max=11768, avg=15.75, stdev=298.83 00:16:52.744 clat (usec): min=316, max=41973, avg=2116.63, stdev=8274.50 00:16:52.744 lat (usec): min=323, max=52928, avg=2132.38, stdev=8317.41 00:16:52.744 clat percentiles (usec): 00:16:52.744 | 1.00th=[ 326], 5.00th=[ 334], 10.00th=[ 338], 20.00th=[ 343], 00:16:52.744 | 30.00th=[ 351], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 363], 00:16:52.744 | 70.00th=[ 367], 80.00th=[ 371], 90.00th=[ 383], 95.00th=[ 404], 00:16:52.744 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:16:52.744 | 99.99th=[42206] 00:16:52.744 bw ( KiB/s): min= 96, max= 9512, per=9.06%, avg=2053.50, stdev=3769.69, samples=6 00:16:52.744 iops : min= 24, max= 2378, avg=513.33, stdev=942.45, samples=6 00:16:52.744 lat (usec) : 500=95.48%, 750=0.13% 00:16:52.744 lat (msec) : 50=4.33% 00:16:52.744 cpu : usr=0.27%, sys=0.75%, ctx=1552, majf=0, minf=1 00:16:52.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.744 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.744 issued rwts: total=1549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.744 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=27306: Mon Jul 15 20:15:17 2024 00:16:52.744 read: IOPS=24, BW=98.1KiB/s (100kB/s)(284KiB/2894msec) 00:16:52.744 slat (nsec): min=12410, max=36464, avg=23881.83, stdev=2374.99 00:16:52.744 clat (usec): min=591, max=42069, avg=40431.15, stdev=4799.43 00:16:52.744 lat (usec): min=627, max=42095, avg=40455.02, stdev=4797.92 00:16:52.744 clat percentiles (usec): 00:16:52.744 | 1.00th=[ 594], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:52.744 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 
00:16:52.744 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:52.744 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:52.744 | 99.99th=[42206] 00:16:52.744 bw ( KiB/s): min= 96, max= 104, per=0.44%, avg=99.20, stdev= 4.38, samples=5 00:16:52.744 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:16:52.744 lat (usec) : 750=1.39% 00:16:52.744 lat (msec) : 50=97.22% 00:16:52.744 cpu : usr=0.14%, sys=0.00%, ctx=73, majf=0, minf=1 00:16:52.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.744 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.744 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.744 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=27314: Mon Jul 15 20:15:17 2024 00:16:52.744 read: IOPS=2916, BW=11.4MiB/s (11.9MB/s)(29.6MiB/2597msec) 00:16:52.744 slat (nsec): min=6907, max=43453, avg=7927.66, stdev=1393.76 00:16:52.744 clat (usec): min=264, max=823, avg=329.55, stdev=27.70 00:16:52.744 lat (usec): min=271, max=830, avg=337.47, stdev=27.71 00:16:52.744 clat percentiles (usec): 00:16:52.744 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 306], 00:16:52.744 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 338], 00:16:52.744 | 70.00th=[ 343], 80.00th=[ 351], 90.00th=[ 363], 95.00th=[ 375], 00:16:52.744 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[ 457], 99.95th=[ 742], 00:16:52.744 | 99.99th=[ 824] 00:16:52.744 bw ( KiB/s): min=11016, max=12384, per=52.09%, avg=11800.00, stdev=513.53, samples=5 00:16:52.744 iops : min= 2754, max= 3096, avg=2950.00, stdev=128.38, samples=5 00:16:52.744 lat (usec) : 500=99.93%, 750=0.01%, 1000=0.04% 00:16:52.744 cpu : usr=1.54%, sys=4.78%, ctx=7575, majf=0, minf=2 00:16:52.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.744 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.744 issued rwts: total=7575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.744 00:16:52.744 Run status group 0 (all jobs): 00:16:52.744 READ: bw=22.1MiB/s (23.2MB/s), 98.1KiB/s-12.0MiB/s (100kB/s-12.6MB/s), io=73.6MiB (77.1MB), run=2597-3325msec 00:16:52.744 00:16:52.744 Disk stats (read/write): 00:16:52.744 nvme0n1: ios=9559/0, merge=0/0, ticks=2863/0, in_queue=2863, util=94.14% 00:16:52.744 nvme0n2: ios=1542/0, merge=0/0, ticks=3011/0, in_queue=3011, util=95.13% 00:16:52.744 nvme0n3: ios=109/0, merge=0/0, ticks=3671/0, in_queue=3671, util=100.00% 00:16:52.744 nvme0n4: ios=7574/0, merge=0/0, ticks=2389/0, in_queue=2389, util=96.41% 00:16:52.744 20:15:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:52.744 20:15:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:53.002 20:15:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:53.002 20:15:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:53.002 20:15:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:53.002 20:15:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:53.259 20:15:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:53.259 20:15:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:53.517 20:15:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:53.517 20:15:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 27011 00:16:53.517 20:15:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:53.517 20:15:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:53.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.517 20:15:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:53.517 20:15:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:16:53.517 20:15:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:53.517 20:15:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.517 20:15:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:53.517 20:15:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.517 20:15:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:16:53.517 20:15:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:53.517 20:15:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:53.517 nvmf hotplug test: fio failed as expected 00:16:53.517 20:15:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:53.775 rmmod nvme_tcp 00:16:53.775 rmmod nvme_fabrics 00:16:53.775 rmmod nvme_keyring 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 23331 ']' 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 23331 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 23331 ']' 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 23331 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:53.775 20:15:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 23331 00:16:54.034 20:15:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:54.034 20:15:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:54.034 20:15:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 23331' 00:16:54.034 killing process with pid 23331 00:16:54.034 20:15:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 23331 00:16:54.034 20:15:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 23331 00:16:54.034 20:15:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:54.034 20:15:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:54.034 20:15:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:54.034 20:15:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:54.034 20:15:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:54.034 20:15:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.034 20:15:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.034 20:15:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.567 20:15:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:56.567 00:16:56.567 real 0m27.098s 00:16:56.567 user 2m19.999s 00:16:56.567 sys 0m8.066s 00:16:56.567 20:15:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:56.567 20:15:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.567 ************************************ 00:16:56.567 END TEST nvmf_fio_target 00:16:56.567 ************************************ 00:16:56.567 20:15:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:56.567 20:15:21 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:56.567 20:15:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:56.567 20:15:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:56.567 20:15:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:56.567 ************************************ 00:16:56.567 START TEST nvmf_bdevio 00:16:56.567 ************************************ 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:56.567 * Looking for test storage... 00:16:56.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.567 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.568 20:15:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:56.568 20:15:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:56.568 20:15:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:56.568 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:56.568 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.568 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:56.568 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:56.568 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:56.568 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.568 20:15:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:16:56.568 20:15:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.568 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:56.568 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:56.568 20:15:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:16:56.568 20:15:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:01.840 20:15:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:01.840 20:15:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:01.840 20:15:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:01.840 20:15:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:01.840 20:15:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:01.840 20:15:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:01.840 20:15:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:01.840 20:15:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:01.840 20:15:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:01.840 20:15:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:01.840 20:15:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:01.840 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:01.840 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.840 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:01.841 Found net devices under 0000:af:00.0: cvl_0_0 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:01.841 
Found net devices under 0000:af:00.1: cvl_0_1 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:01.841 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:02.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:17:02.100 00:17:02.100 --- 10.0.0.2 ping statistics --- 00:17:02.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.100 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:02.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:02.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:17:02.100 00:17:02.100 --- 10.0.0.1 ping statistics --- 00:17:02.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.100 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=31693 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 31693 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 31693 ']' 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:02.100 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:02.100 [2024-07-15 20:15:27.385328] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:17:02.100 [2024-07-15 20:15:27.385389] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.100 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.359 [2024-07-15 20:15:27.462580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:02.359 [2024-07-15 20:15:27.554768] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.359 [2024-07-15 20:15:27.554810] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:02.359 [2024-07-15 20:15:27.554820] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.359 [2024-07-15 20:15:27.554828] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.359 [2024-07-15 20:15:27.554835] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.359 [2024-07-15 20:15:27.554950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:02.359 [2024-07-15 20:15:27.555068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:02.359 [2024-07-15 20:15:27.555182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:02.359 [2024-07-15 20:15:27.555183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.359 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.359 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:17:02.360 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.360 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:02.360 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:02.360 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.360 20:15:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:02.360 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.360 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:02.360 [2024-07-15 20:15:27.707499] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:02.618 Malloc0 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
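At this point the bdevio run is fully wired up: nvmftestinit moved the first ice-driven port (cvl_0_0) into the cvl_0_0_ns_spdk namespace as the target side with 10.0.0.2/24, kept cvl_0_1 in the root namespace as the initiator with 10.0.0.1/24, opened TCP port 4420 in iptables and ping-checked both directions; nvmfappstart then launched nvmf_tgt inside that namespace and bdevio.sh@18-22 issued the RPCs that create the transport, a Malloc bdev, the subsystem and its listener. Collected from the trace above into a replayable sketch (paths shortened; rpc.py stands in for the test's rpc_cmd wrapper and talks to the default /var/tmp/spdk.sock):

# network plumbing performed by nvmftestinit (interface names and addresses taken from the log)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
modprobe nvme-tcp

# target bring-up performed by nvmfappstart and bdevio.sh
# (the test waits for the RPC socket via waitforlisten 31693 before issuing the RPCs)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420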
00:17:02.618 [2024-07-15 20:15:27.763837] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:02.618 { 00:17:02.618 "params": { 00:17:02.618 "name": "Nvme$subsystem", 00:17:02.618 "trtype": "$TEST_TRANSPORT", 00:17:02.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.618 "adrfam": "ipv4", 00:17:02.618 "trsvcid": "$NVMF_PORT", 00:17:02.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.618 "hdgst": ${hdgst:-false}, 00:17:02.618 "ddgst": ${ddgst:-false} 00:17:02.618 }, 00:17:02.618 "method": "bdev_nvme_attach_controller" 00:17:02.618 } 00:17:02.618 EOF 00:17:02.618 )") 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:02.618 20:15:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:02.618 "params": { 00:17:02.618 "name": "Nvme1", 00:17:02.618 "trtype": "tcp", 00:17:02.618 "traddr": "10.0.0.2", 00:17:02.618 "adrfam": "ipv4", 00:17:02.618 "trsvcid": "4420", 00:17:02.618 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.618 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:02.618 "hdgst": false, 00:17:02.618 "ddgst": false 00:17:02.618 }, 00:17:02.618 "method": "bdev_nvme_attach_controller" 00:17:02.618 }' 00:17:02.618 [2024-07-15 20:15:27.817460] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:17:02.618 [2024-07-15 20:15:27.817522] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid31870 ] 00:17:02.618 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.618 [2024-07-15 20:15:27.900755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:02.876 [2024-07-15 20:15:27.991127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.876 [2024-07-15 20:15:27.991228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.876 [2024-07-15 20:15:27.991229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.134 I/O targets: 00:17:03.134 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:03.134 00:17:03.134 00:17:03.134 CUnit - A unit testing framework for C - Version 2.1-3 00:17:03.134 http://cunit.sourceforge.net/ 00:17:03.134 00:17:03.134 00:17:03.134 Suite: bdevio tests on: Nvme1n1 00:17:03.134 Test: blockdev write read block ...passed 00:17:03.134 Test: blockdev write zeroes read block ...passed 00:17:03.134 Test: blockdev write zeroes read no split ...passed 00:17:03.134 Test: blockdev write zeroes read split ...passed 00:17:03.134 Test: blockdev write zeroes read split partial ...passed 00:17:03.134 Test: blockdev reset ...[2024-07-15 20:15:28.431571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:03.134 [2024-07-15 20:15:28.431645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4bc80 (9): Bad file descriptor 00:17:03.393 [2024-07-15 20:15:28.542625] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
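With the listener up, bdevio.sh@24 starts the bdevio application against the target, passing the configuration on /dev/fd/62 instead of a file: gen_nvmf_target_json emits the bdev_nvme_attach_controller entry visible in the printf output above (Nvme1 at 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1, host/data digests off) and jq folds it into a regular SPDK JSON config. The outer subsystems/bdev wrapper is not printed in this log, so the sketch below reconstructs it; only the params block is verbatim:

# path shortened from the one in the trace; the heredoc stands in for gen_nvmf_target_json
./test/bdev/bdevio/bdevio --json /dev/fd/62 62<<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

The controller reset traced just above (nvme_ctrlr_disconnect followed by a successful _bdev_nvme_reset_ctrlr_complete) belongs to the "blockdev reset" case; the write/read, compare and passthru cases that follow all run against the same Nvme1n1 bdev exposed by that attached controller.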
00:17:03.393 passed 00:17:03.393 Test: blockdev write read 8 blocks ...passed 00:17:03.393 Test: blockdev write read size > 128k ...passed 00:17:03.393 Test: blockdev write read invalid size ...passed 00:17:03.393 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:03.393 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:03.393 Test: blockdev write read max offset ...passed 00:17:03.393 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:03.393 Test: blockdev writev readv 8 blocks ...passed 00:17:03.393 Test: blockdev writev readv 30 x 1block ...passed 00:17:03.652 Test: blockdev writev readv block ...passed 00:17:03.652 Test: blockdev writev readv size > 128k ...passed 00:17:03.652 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:03.652 Test: blockdev comparev and writev ...[2024-07-15 20:15:28.761435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.652 [2024-07-15 20:15:28.761462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.652 [2024-07-15 20:15:28.761474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.652 [2024-07-15 20:15:28.761481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:03.652 [2024-07-15 20:15:28.762045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.652 [2024-07-15 20:15:28.762055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:03.652 [2024-07-15 20:15:28.762065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.652 [2024-07-15 20:15:28.762071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:03.652 [2024-07-15 20:15:28.762544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.652 [2024-07-15 20:15:28.762554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:03.652 [2024-07-15 20:15:28.762565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.652 [2024-07-15 20:15:28.762571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:03.652 [2024-07-15 20:15:28.763062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.652 [2024-07-15 20:15:28.763071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:03.652 [2024-07-15 20:15:28.763081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:03.652 [2024-07-15 20:15:28.763088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:03.652 passed 00:17:03.652 Test: blockdev nvme passthru rw ...passed 00:17:03.653 Test: blockdev nvme passthru vendor specific ...[2024-07-15 20:15:28.846751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:03.653 [2024-07-15 20:15:28.846765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:03.653 [2024-07-15 20:15:28.846960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:03.653 [2024-07-15 20:15:28.846968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:03.653 [2024-07-15 20:15:28.847169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:03.653 [2024-07-15 20:15:28.847177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:03.653 [2024-07-15 20:15:28.847389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:03.653 [2024-07-15 20:15:28.847397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:03.653 passed 00:17:03.653 Test: blockdev nvme admin passthru ...passed 00:17:03.653 Test: blockdev copy ...passed 00:17:03.653 00:17:03.653 Run Summary: Type Total Ran Passed Failed Inactive 00:17:03.653 suites 1 1 n/a 0 0 00:17:03.653 tests 23 23 23 0 0 00:17:03.653 asserts 152 152 152 0 n/a 00:17:03.653 00:17:03.653 Elapsed time = 1.181 seconds 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:03.917 rmmod nvme_tcp 00:17:03.917 rmmod nvme_fabrics 00:17:03.917 rmmod nvme_keyring 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 31693 ']' 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 31693 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
31693 ']' 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 31693 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 31693 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 31693' 00:17:03.917 killing process with pid 31693 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 31693 00:17:03.917 20:15:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 31693 00:17:04.242 20:15:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:04.242 20:15:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:04.242 20:15:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:04.242 20:15:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:04.242 20:15:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:04.242 20:15:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.242 20:15:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.242 20:15:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.160 20:15:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:06.160 00:17:06.160 real 0m10.015s 00:17:06.160 user 0m11.522s 00:17:06.160 sys 0m4.873s 00:17:06.160 20:15:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:06.160 20:15:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:06.160 ************************************ 00:17:06.160 END TEST nvmf_bdevio 00:17:06.160 ************************************ 00:17:06.419 20:15:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:06.419 20:15:31 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:06.419 20:15:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:06.419 20:15:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:06.419 20:15:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:06.419 ************************************ 00:17:06.419 START TEST nvmf_auth_target 00:17:06.419 ************************************ 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:06.419 * Looking for test storage... 
00:17:06.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:06.419 20:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:11.690 20:15:36 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:11.690 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:11.690 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:17:11.690 Found net devices under 0000:af:00.0: cvl_0_0 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:11.690 Found net devices under 0000:af:00.1: cvl_0_1 00:17:11.690 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:11.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:17:11.691 00:17:11.691 --- 10.0.0.2 ping statistics --- 00:17:11.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.691 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:11.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:11.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:17:11.691 00:17:11.691 --- 10.0.0.1 ping statistics --- 00:17:11.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.691 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=35687 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 35687 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 35687 ']' 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.691 20:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=35733 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9c48c974aa8cbd19321d92aae3877eac6d5078a049c100b4 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Dpx 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9c48c974aa8cbd19321d92aae3877eac6d5078a049c100b4 0 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9c48c974aa8cbd19321d92aae3877eac6d5078a049c100b4 0 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9c48c974aa8cbd19321d92aae3877eac6d5078a049c100b4 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Dpx 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Dpx 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Dpx 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5ea98b50cd21120dd94d65f7372ba8adab3c5908697ff690b5d35f0ac793ac2b 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.vKm 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5ea98b50cd21120dd94d65f7372ba8adab3c5908697ff690b5d35f0ac793ac2b 3 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5ea98b50cd21120dd94d65f7372ba8adab3c5908697ff690b5d35f0ac793ac2b 3 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5ea98b50cd21120dd94d65f7372ba8adab3c5908697ff690b5d35f0ac793ac2b 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.vKm 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.vKm 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.vKm 00:17:12.630 20:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:12.890 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:12.890 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:12.890 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:12.890 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:12.890 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:12.890 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:12.890 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c680ff538c8146a6df254084fb64b25a 00:17:12.890 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:12.890 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.fpV 00:17:12.890 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c680ff538c8146a6df254084fb64b25a 1 00:17:12.890 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c680ff538c8146a6df254084fb64b25a 1 00:17:12.890 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:12.890 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:12.890 20:15:37 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=c680ff538c8146a6df254084fb64b25a 00:17:12.890 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:12.890 20:15:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.fpV 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.fpV 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.fpV 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dc99500292470806fd2c5854a4e267d53e318b709d92438e 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.MHh 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dc99500292470806fd2c5854a4e267d53e318b709d92438e 2 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dc99500292470806fd2c5854a4e267d53e318b709d92438e 2 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dc99500292470806fd2c5854a4e267d53e318b709d92438e 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.MHh 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.MHh 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.MHh 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e20493ca77eb67bbd3803076395f1f38c8330d5633adaa52 00:17:12.890 
20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.M5y 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e20493ca77eb67bbd3803076395f1f38c8330d5633adaa52 2 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e20493ca77eb67bbd3803076395f1f38c8330d5633adaa52 2 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:12.890 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e20493ca77eb67bbd3803076395f1f38c8330d5633adaa52 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.M5y 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.M5y 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.M5y 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9a24445fff627afabd6a141ccdabda46 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Gqu 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9a24445fff627afabd6a141ccdabda46 1 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9a24445fff627afabd6a141ccdabda46 1 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9a24445fff627afabd6a141ccdabda46 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Gqu 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Gqu 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Gqu 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:17:12.891 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:13.148 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=81810b748af3930211b5eaa00d7662029366c424dbbecb58d19002138512e192 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.7Ip 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 81810b748af3930211b5eaa00d7662029366c424dbbecb58d19002138512e192 3 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 81810b748af3930211b5eaa00d7662029366c424dbbecb58d19002138512e192 3 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=81810b748af3930211b5eaa00d7662029366c424dbbecb58d19002138512e192 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.7Ip 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.7Ip 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.7Ip 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 35687 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 35687 ']' 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
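The trace above covers the gen_dhchap_key calls that produce keys[0..3] and ckeys[0..2]: each secret is a random hex string read from /dev/urandom with xxd, written to a 0600 tempfile, and wrapped into the DHHC-1 secret format by an inline "python -" helper whose body is not echoed in the log. Below is a minimal sketch of that pattern; the gen_dhchap_key_sketch name and the encoding step (base64 of the ASCII secret plus its CRC-32, as in the NVMe DH-HMAC-CHAP secret representation) are illustrative assumptions, not the literal nvmf/common.sh code.

    gen_dhchap_key_sketch() {
        local digest=$1 len=$2            # e.g. "null" 48, "sha512" 64 (length in hex characters)
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local key file
        # len/2 random bytes, printed as one continuous hex string (matches xxd -p -c0 -l N above).
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
        file=$(mktemp -t "spdk.key-${digest}.XXX")
        # Hypothetical DHHC-1 wrapping: base64(secret || CRC-32 of secret), CRC assumed little-endian.
        python3 -c '
    import base64, binascii, sys
    secret = sys.argv[1].encode()
    crc = binascii.crc32(secret).to_bytes(4, "little")
    print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
    ' "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

    # Example: a 48-character secret with digest "null" (secret used as-is), like keys[0] above.
    # keyfile=$(gen_dhchap_key_sketch null 48)

The resulting key files are what keyring_file_add_key and the --dhchap-key/--dhchap-ctrlr-key options consume later in the trace, and the literal DHHC-1:xx:...: strings passed to nvme connect further down are the same secrets in that serialized form.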
00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.149 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.406 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.406 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:13.406 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 35733 /var/tmp/host.sock 00:17:13.406 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 35733 ']' 00:17:13.407 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:13.407 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.407 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:13.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:13.407 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.407 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.664 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.664 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:13.664 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:13.664 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.664 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.664 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.665 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:13.665 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Dpx 00:17:13.665 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.665 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.665 20:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.665 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Dpx 00:17:13.665 20:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Dpx 00:17:13.923 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.vKm ]] 00:17:13.923 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vKm 00:17:13.923 20:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.923 20:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.923 20:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.923 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vKm 00:17:13.923 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vKm 00:17:14.182 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:14.182 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.fpV 00:17:14.182 20:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.182 20:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.182 20:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.182 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.fpV 00:17:14.182 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.fpV 00:17:14.439 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.MHh ]] 00:17:14.439 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MHh 00:17:14.439 20:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.439 20:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.439 20:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.439 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MHh 00:17:14.439 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MHh 00:17:14.744 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:14.744 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.M5y 00:17:14.744 20:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.744 20:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.744 20:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.744 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.M5y 00:17:14.744 20:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.M5y 00:17:14.744 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Gqu ]] 00:17:14.744 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Gqu 00:17:14.744 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.744 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.744 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.744 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Gqu 00:17:14.744 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.Gqu 00:17:15.003 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:15.003 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.7Ip 00:17:15.003 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.003 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.003 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.003 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.7Ip 00:17:15.003 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.7Ip 00:17:15.261 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:15.261 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:15.261 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.261 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.261 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:15.261 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:15.519 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:15.519 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.519 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:15.519 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:15.519 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:15.519 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.519 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.519 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.519 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.519 20:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.519 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.519 20:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.777 00:17:15.777 20:15:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.777 20:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.777 20:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.036 20:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.036 20:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.036 20:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.036 20:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.036 20:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.036 20:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.036 { 00:17:16.036 "cntlid": 1, 00:17:16.036 "qid": 0, 00:17:16.036 "state": "enabled", 00:17:16.036 "thread": "nvmf_tgt_poll_group_000", 00:17:16.036 "listen_address": { 00:17:16.036 "trtype": "TCP", 00:17:16.036 "adrfam": "IPv4", 00:17:16.036 "traddr": "10.0.0.2", 00:17:16.036 "trsvcid": "4420" 00:17:16.036 }, 00:17:16.036 "peer_address": { 00:17:16.036 "trtype": "TCP", 00:17:16.036 "adrfam": "IPv4", 00:17:16.036 "traddr": "10.0.0.1", 00:17:16.036 "trsvcid": "36632" 00:17:16.036 }, 00:17:16.036 "auth": { 00:17:16.036 "state": "completed", 00:17:16.036 "digest": "sha256", 00:17:16.036 "dhgroup": "null" 00:17:16.036 } 00:17:16.036 } 00:17:16.036 ]' 00:17:16.036 20:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.294 20:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.294 20:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.294 20:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:16.294 20:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.294 20:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.294 20:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.294 20:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.553 20:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OWM0OGM5NzRhYThjYmQxOTMyMWQ5MmFhZTM4NzdlYWM2ZDUwNzhhMDQ5YzEwMGI0NxiuJA==: --dhchap-ctrl-secret DHHC-1:03:NWVhOThiNTBjZDIxMTIwZGQ5NGQ2NWY3MzcyYmE4YWRhYjNjNTkwODY5N2ZmNjkwYjVkMzVmMGFjNzkzYWMyYjtQE7M=: 00:17:17.121 20:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.121 20:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:17.121 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.121 20:15:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.380 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.380 20:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.380 20:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:17.380 20:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:17.380 20:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:17.380 20:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.380 20:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:17.380 20:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:17.380 20:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:17.380 20:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.380 20:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.380 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.380 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.639 20:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.639 20:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.639 20:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.898 00:17:17.898 20:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.898 20:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.898 20:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.164 20:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.164 20:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.164 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.164 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.164 20:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.164 20:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.164 { 00:17:18.164 "cntlid": 3, 00:17:18.164 "qid": 0, 00:17:18.164 
"state": "enabled", 00:17:18.164 "thread": "nvmf_tgt_poll_group_000", 00:17:18.164 "listen_address": { 00:17:18.164 "trtype": "TCP", 00:17:18.164 "adrfam": "IPv4", 00:17:18.164 "traddr": "10.0.0.2", 00:17:18.164 "trsvcid": "4420" 00:17:18.164 }, 00:17:18.164 "peer_address": { 00:17:18.164 "trtype": "TCP", 00:17:18.164 "adrfam": "IPv4", 00:17:18.164 "traddr": "10.0.0.1", 00:17:18.164 "trsvcid": "40914" 00:17:18.164 }, 00:17:18.164 "auth": { 00:17:18.164 "state": "completed", 00:17:18.164 "digest": "sha256", 00:17:18.164 "dhgroup": "null" 00:17:18.164 } 00:17:18.164 } 00:17:18.164 ]' 00:17:18.164 20:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.164 20:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.164 20:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.164 20:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:18.164 20:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.164 20:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.164 20:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.164 20:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.425 20:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzY4MGZmNTM4YzgxNDZhNmRmMjU0MDg0ZmI2NGIyNWFEugdb: --dhchap-ctrl-secret DHHC-1:02:ZGM5OTUwMDI5MjQ3MDgwNmZkMmM1ODU0YTRlMjY3ZDUzZTMxOGI3MDlkOTI0Mzhla+EIEg==: 00:17:19.360 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.360 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:19.360 20:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.360 20:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.360 20:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.360 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.360 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:19.360 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:19.618 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:19.618 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.618 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:19.618 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:19.618 20:15:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:19.618 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.618 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.618 20:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.618 20:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.618 20:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.618 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.618 20:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.877 00:17:19.877 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.877 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.877 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.133 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.133 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.133 20:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.133 20:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.133 20:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.133 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.133 { 00:17:20.133 "cntlid": 5, 00:17:20.133 "qid": 0, 00:17:20.133 "state": "enabled", 00:17:20.134 "thread": "nvmf_tgt_poll_group_000", 00:17:20.134 "listen_address": { 00:17:20.134 "trtype": "TCP", 00:17:20.134 "adrfam": "IPv4", 00:17:20.134 "traddr": "10.0.0.2", 00:17:20.134 "trsvcid": "4420" 00:17:20.134 }, 00:17:20.134 "peer_address": { 00:17:20.134 "trtype": "TCP", 00:17:20.134 "adrfam": "IPv4", 00:17:20.134 "traddr": "10.0.0.1", 00:17:20.134 "trsvcid": "40924" 00:17:20.134 }, 00:17:20.134 "auth": { 00:17:20.134 "state": "completed", 00:17:20.134 "digest": "sha256", 00:17:20.134 "dhgroup": "null" 00:17:20.134 } 00:17:20.134 } 00:17:20.134 ]' 00:17:20.134 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.134 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.134 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.134 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:20.134 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:17:20.391 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.391 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.391 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.648 20:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZTIwNDkzY2E3N2ViNjdiYmQzODAzMDc2Mzk1ZjFmMzhjODMzMGQ1NjMzYWRhYTUy6e5vNA==: --dhchap-ctrl-secret DHHC-1:01:OWEyNDQ0NWZmZjYyN2FmYWJkNmExNDFjY2RhYmRhNDYXMKE8: 00:17:21.213 20:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.471 20:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:21.471 20:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.471 20:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.471 20:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.471 20:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.472 20:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:21.472 20:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:21.731 20:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:21.731 20:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.731 20:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:21.731 20:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:21.731 20:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:21.731 20:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.731 20:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:21.731 20:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.731 20:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.731 20:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.731 20:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.731 20:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.990 00:17:21.990 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.990 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.990 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.248 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.248 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.248 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.248 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.248 20:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.248 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.248 { 00:17:22.248 "cntlid": 7, 00:17:22.248 "qid": 0, 00:17:22.248 "state": "enabled", 00:17:22.248 "thread": "nvmf_tgt_poll_group_000", 00:17:22.248 "listen_address": { 00:17:22.248 "trtype": "TCP", 00:17:22.248 "adrfam": "IPv4", 00:17:22.248 "traddr": "10.0.0.2", 00:17:22.248 "trsvcid": "4420" 00:17:22.248 }, 00:17:22.248 "peer_address": { 00:17:22.248 "trtype": "TCP", 00:17:22.248 "adrfam": "IPv4", 00:17:22.248 "traddr": "10.0.0.1", 00:17:22.248 "trsvcid": "40954" 00:17:22.248 }, 00:17:22.248 "auth": { 00:17:22.248 "state": "completed", 00:17:22.248 "digest": "sha256", 00:17:22.248 "dhgroup": "null" 00:17:22.248 } 00:17:22.248 } 00:17:22.248 ]' 00:17:22.248 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.248 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:22.248 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.248 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:22.248 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.248 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.248 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.248 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.507 20:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODE4MTBiNzQ4YWYzOTMwMjExYjVlYWEwMGQ3NjYyMDI5MzY2YzQyNGRiYmVjYjU4ZDE5MDAyMTM4NTEyZTE5MiTP0Ak=: 00:17:23.442 20:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.442 20:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:23.442 20:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.442 20:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.442 20:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.442 20:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.442 20:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.442 20:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:23.442 20:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:23.701 20:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:23.701 20:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.701 20:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:23.701 20:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:23.701 20:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:23.701 20:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.701 20:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.701 20:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.701 20:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.701 20:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.701 20:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.701 20:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.959 00:17:23.959 20:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.959 20:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.959 20:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.216 20:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.216 20:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.216 20:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:17:24.216 20:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.216 20:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.216 20:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.216 { 00:17:24.216 "cntlid": 9, 00:17:24.216 "qid": 0, 00:17:24.216 "state": "enabled", 00:17:24.216 "thread": "nvmf_tgt_poll_group_000", 00:17:24.216 "listen_address": { 00:17:24.216 "trtype": "TCP", 00:17:24.216 "adrfam": "IPv4", 00:17:24.216 "traddr": "10.0.0.2", 00:17:24.216 "trsvcid": "4420" 00:17:24.216 }, 00:17:24.216 "peer_address": { 00:17:24.216 "trtype": "TCP", 00:17:24.217 "adrfam": "IPv4", 00:17:24.217 "traddr": "10.0.0.1", 00:17:24.217 "trsvcid": "40974" 00:17:24.217 }, 00:17:24.217 "auth": { 00:17:24.217 "state": "completed", 00:17:24.217 "digest": "sha256", 00:17:24.217 "dhgroup": "ffdhe2048" 00:17:24.217 } 00:17:24.217 } 00:17:24.217 ]' 00:17:24.217 20:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.217 20:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.217 20:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.217 20:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:24.217 20:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.217 20:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.217 20:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.217 20:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.474 20:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OWM0OGM5NzRhYThjYmQxOTMyMWQ5MmFhZTM4NzdlYWM2ZDUwNzhhMDQ5YzEwMGI0NxiuJA==: --dhchap-ctrl-secret DHHC-1:03:NWVhOThiNTBjZDIxMTIwZGQ5NGQ2NWY3MzcyYmE4YWRhYjNjNTkwODY5N2ZmNjkwYjVkMzVmMGFjNzkzYWMyYjtQE7M=: 00:17:25.409 20:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.409 20:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:25.409 20:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.409 20:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.409 20:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.409 20:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.409 20:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:25.409 20:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:25.667 20:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:25.667 20:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.667 20:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:25.667 20:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:25.667 20:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:25.667 20:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.667 20:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.667 20:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.667 20:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.667 20:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.667 20:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.667 20:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.925 00:17:25.925 20:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.925 20:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.925 20:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.184 20:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.184 20:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.184 20:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.184 20:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.184 20:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.184 20:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.184 { 00:17:26.184 "cntlid": 11, 00:17:26.184 "qid": 0, 00:17:26.184 "state": "enabled", 00:17:26.184 "thread": "nvmf_tgt_poll_group_000", 00:17:26.184 "listen_address": { 00:17:26.184 "trtype": "TCP", 00:17:26.184 "adrfam": "IPv4", 00:17:26.184 "traddr": "10.0.0.2", 00:17:26.184 "trsvcid": "4420" 00:17:26.184 }, 00:17:26.184 "peer_address": { 00:17:26.184 "trtype": "TCP", 00:17:26.184 "adrfam": "IPv4", 00:17:26.184 "traddr": "10.0.0.1", 00:17:26.184 "trsvcid": "40992" 00:17:26.184 }, 00:17:26.184 "auth": { 00:17:26.184 "state": "completed", 00:17:26.184 "digest": "sha256", 00:17:26.184 "dhgroup": "ffdhe2048" 00:17:26.184 } 00:17:26.184 } 00:17:26.184 ]' 00:17:26.184 
20:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.184 20:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.184 20:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.184 20:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:26.184 20:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.184 20:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.184 20:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.184 20:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.443 20:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzY4MGZmNTM4YzgxNDZhNmRmMjU0MDg0ZmI2NGIyNWFEugdb: --dhchap-ctrl-secret DHHC-1:02:ZGM5OTUwMDI5MjQ3MDgwNmZkMmM1ODU0YTRlMjY3ZDUzZTMxOGI3MDlkOTI0Mzhla+EIEg==: 00:17:27.378 20:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.378 20:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:27.378 20:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.378 20:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.378 20:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.378 20:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.378 20:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:27.378 20:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:27.637 20:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:27.637 20:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.637 20:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:27.637 20:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:27.637 20:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:27.637 20:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.637 20:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.637 20:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.637 20:15:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:27.637 20:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.637 20:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.637 20:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.897 00:17:27.897 20:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.897 20:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.897 20:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.154 20:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.154 20:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.154 20:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.154 20:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.154 20:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.154 20:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.154 { 00:17:28.154 "cntlid": 13, 00:17:28.154 "qid": 0, 00:17:28.154 "state": "enabled", 00:17:28.154 "thread": "nvmf_tgt_poll_group_000", 00:17:28.154 "listen_address": { 00:17:28.154 "trtype": "TCP", 00:17:28.154 "adrfam": "IPv4", 00:17:28.154 "traddr": "10.0.0.2", 00:17:28.154 "trsvcid": "4420" 00:17:28.154 }, 00:17:28.154 "peer_address": { 00:17:28.154 "trtype": "TCP", 00:17:28.154 "adrfam": "IPv4", 00:17:28.154 "traddr": "10.0.0.1", 00:17:28.154 "trsvcid": "45342" 00:17:28.154 }, 00:17:28.154 "auth": { 00:17:28.154 "state": "completed", 00:17:28.154 "digest": "sha256", 00:17:28.154 "dhgroup": "ffdhe2048" 00:17:28.154 } 00:17:28.154 } 00:17:28.154 ]' 00:17:28.154 20:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.154 20:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.154 20:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.411 20:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:28.411 20:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.411 20:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.411 20:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.411 20:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.669 20:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZTIwNDkzY2E3N2ViNjdiYmQzODAzMDc2Mzk1ZjFmMzhjODMzMGQ1NjMzYWRhYTUy6e5vNA==: --dhchap-ctrl-secret DHHC-1:01:OWEyNDQ0NWZmZjYyN2FmYWJkNmExNDFjY2RhYmRhNDYXMKE8: 00:17:29.236 20:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.236 20:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:29.236 20:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.236 20:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.236 20:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.236 20:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.236 20:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:29.236 20:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:29.494 20:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:29.494 20:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.494 20:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:29.494 20:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:29.494 20:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:29.494 20:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.494 20:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:29.494 20:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.494 20:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.494 20:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.494 20:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.494 20:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.752 00:17:29.752 20:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.752 20:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.752 20:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.027 20:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.027 20:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.027 20:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.027 20:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.027 20:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.027 20:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.027 { 00:17:30.027 "cntlid": 15, 00:17:30.027 "qid": 0, 00:17:30.027 "state": "enabled", 00:17:30.027 "thread": "nvmf_tgt_poll_group_000", 00:17:30.027 "listen_address": { 00:17:30.027 "trtype": "TCP", 00:17:30.027 "adrfam": "IPv4", 00:17:30.027 "traddr": "10.0.0.2", 00:17:30.027 "trsvcid": "4420" 00:17:30.027 }, 00:17:30.027 "peer_address": { 00:17:30.027 "trtype": "TCP", 00:17:30.027 "adrfam": "IPv4", 00:17:30.027 "traddr": "10.0.0.1", 00:17:30.027 "trsvcid": "45366" 00:17:30.027 }, 00:17:30.027 "auth": { 00:17:30.027 "state": "completed", 00:17:30.027 "digest": "sha256", 00:17:30.027 "dhgroup": "ffdhe2048" 00:17:30.027 } 00:17:30.027 } 00:17:30.027 ]' 00:17:30.027 20:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.027 20:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.027 20:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.285 20:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:30.285 20:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.285 20:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.285 20:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.285 20:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.543 20:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODE4MTBiNzQ4YWYzOTMwMjExYjVlYWEwMGQ3NjYyMDI5MzY2YzQyNGRiYmVjYjU4ZDE5MDAyMTM4NTEyZTE5MiTP0Ak=: 00:17:31.109 20:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.109 20:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:31.109 20:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.109 20:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.109 20:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.109 20:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.109 20:15:56 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.109 20:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:31.109 20:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:31.367 20:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:31.367 20:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.367 20:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:31.367 20:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:31.367 20:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:31.367 20:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.367 20:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.367 20:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.367 20:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.367 20:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.367 20:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.367 20:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.626 00:17:31.626 20:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.626 20:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.626 20:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.884 20:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.884 20:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.884 20:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.884 20:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.884 20:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.884 20:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.884 { 00:17:31.884 "cntlid": 17, 00:17:31.884 "qid": 0, 00:17:31.884 "state": "enabled", 00:17:31.884 "thread": "nvmf_tgt_poll_group_000", 00:17:31.884 "listen_address": { 00:17:31.884 "trtype": "TCP", 00:17:31.884 "adrfam": "IPv4", 00:17:31.884 "traddr": 
"10.0.0.2", 00:17:31.884 "trsvcid": "4420" 00:17:31.884 }, 00:17:31.884 "peer_address": { 00:17:31.884 "trtype": "TCP", 00:17:31.884 "adrfam": "IPv4", 00:17:31.884 "traddr": "10.0.0.1", 00:17:31.884 "trsvcid": "45388" 00:17:31.884 }, 00:17:31.884 "auth": { 00:17:31.884 "state": "completed", 00:17:31.884 "digest": "sha256", 00:17:31.884 "dhgroup": "ffdhe3072" 00:17:31.884 } 00:17:31.884 } 00:17:31.884 ]' 00:17:31.884 20:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.143 20:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:32.143 20:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.143 20:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:32.143 20:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.143 20:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.143 20:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.143 20:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.400 20:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OWM0OGM5NzRhYThjYmQxOTMyMWQ5MmFhZTM4NzdlYWM2ZDUwNzhhMDQ5YzEwMGI0NxiuJA==: --dhchap-ctrl-secret DHHC-1:03:NWVhOThiNTBjZDIxMTIwZGQ5NGQ2NWY3MzcyYmE4YWRhYjNjNTkwODY5N2ZmNjkwYjVkMzVmMGFjNzkzYWMyYjtQE7M=: 00:17:33.337 20:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.337 20:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:33.337 20:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.337 20:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.337 20:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.337 20:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.337 20:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:33.337 20:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:33.337 20:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:33.337 20:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.337 20:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:33.337 20:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:33.337 20:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:33.337 20:15:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.337 20:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.337 20:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.337 20:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.337 20:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.337 20:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.337 20:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.905 00:17:33.905 20:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.905 20:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.905 20:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.905 20:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.905 20:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.905 20:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.905 20:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.167 20:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.167 20:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.167 { 00:17:34.167 "cntlid": 19, 00:17:34.167 "qid": 0, 00:17:34.167 "state": "enabled", 00:17:34.167 "thread": "nvmf_tgt_poll_group_000", 00:17:34.167 "listen_address": { 00:17:34.167 "trtype": "TCP", 00:17:34.167 "adrfam": "IPv4", 00:17:34.167 "traddr": "10.0.0.2", 00:17:34.167 "trsvcid": "4420" 00:17:34.167 }, 00:17:34.167 "peer_address": { 00:17:34.167 "trtype": "TCP", 00:17:34.167 "adrfam": "IPv4", 00:17:34.167 "traddr": "10.0.0.1", 00:17:34.167 "trsvcid": "45424" 00:17:34.167 }, 00:17:34.167 "auth": { 00:17:34.167 "state": "completed", 00:17:34.167 "digest": "sha256", 00:17:34.167 "dhgroup": "ffdhe3072" 00:17:34.167 } 00:17:34.167 } 00:17:34.167 ]' 00:17:34.167 20:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.167 20:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.167 20:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.167 20:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:34.167 20:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.167 20:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.167 20:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.167 20:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.450 20:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzY4MGZmNTM4YzgxNDZhNmRmMjU0MDg0ZmI2NGIyNWFEugdb: --dhchap-ctrl-secret DHHC-1:02:ZGM5OTUwMDI5MjQ3MDgwNmZkMmM1ODU0YTRlMjY3ZDUzZTMxOGI3MDlkOTI0Mzhla+EIEg==: 00:17:35.430 20:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.430 20:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:35.430 20:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.430 20:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.430 20:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.430 20:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.430 20:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:35.430 20:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:35.430 20:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:35.430 20:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.430 20:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:35.430 20:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:35.430 20:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:35.430 20:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.430 20:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.430 20:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.430 20:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.430 20:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.430 20:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.430 20:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.689 00:17:35.947 20:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.947 20:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.947 20:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.947 20:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.947 20:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.947 20:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.947 20:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.205 20:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.205 20:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.205 { 00:17:36.205 "cntlid": 21, 00:17:36.205 "qid": 0, 00:17:36.205 "state": "enabled", 00:17:36.205 "thread": "nvmf_tgt_poll_group_000", 00:17:36.205 "listen_address": { 00:17:36.205 "trtype": "TCP", 00:17:36.205 "adrfam": "IPv4", 00:17:36.205 "traddr": "10.0.0.2", 00:17:36.205 "trsvcid": "4420" 00:17:36.205 }, 00:17:36.205 "peer_address": { 00:17:36.205 "trtype": "TCP", 00:17:36.205 "adrfam": "IPv4", 00:17:36.205 "traddr": "10.0.0.1", 00:17:36.205 "trsvcid": "45464" 00:17:36.205 }, 00:17:36.205 "auth": { 00:17:36.205 "state": "completed", 00:17:36.205 "digest": "sha256", 00:17:36.205 "dhgroup": "ffdhe3072" 00:17:36.205 } 00:17:36.205 } 00:17:36.205 ]' 00:17:36.205 20:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.205 20:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.205 20:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.205 20:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:36.205 20:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.205 20:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.205 20:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.205 20:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.464 20:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZTIwNDkzY2E3N2ViNjdiYmQzODAzMDc2Mzk1ZjFmMzhjODMzMGQ1NjMzYWRhYTUy6e5vNA==: --dhchap-ctrl-secret DHHC-1:01:OWEyNDQ0NWZmZjYyN2FmYWJkNmExNDFjY2RhYmRhNDYXMKE8: 00:17:37.400 20:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:37.400 20:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:37.400 20:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.400 20:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.400 20:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.400 20:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.400 20:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:37.400 20:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:37.400 20:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:37.400 20:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.400 20:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:37.400 20:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:37.400 20:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:37.400 20:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.401 20:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:37.401 20:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.401 20:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.401 20:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.401 20:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:37.401 20:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:37.659 00:17:37.659 20:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.659 20:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.659 20:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.917 20:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.917 20:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.917 20:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.917 20:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:37.917 20:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.917 20:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.917 { 00:17:37.917 "cntlid": 23, 00:17:37.917 "qid": 0, 00:17:37.917 "state": "enabled", 00:17:37.917 "thread": "nvmf_tgt_poll_group_000", 00:17:37.917 "listen_address": { 00:17:37.917 "trtype": "TCP", 00:17:37.917 "adrfam": "IPv4", 00:17:37.917 "traddr": "10.0.0.2", 00:17:37.917 "trsvcid": "4420" 00:17:37.917 }, 00:17:37.917 "peer_address": { 00:17:37.917 "trtype": "TCP", 00:17:37.917 "adrfam": "IPv4", 00:17:37.917 "traddr": "10.0.0.1", 00:17:37.917 "trsvcid": "45772" 00:17:37.917 }, 00:17:37.917 "auth": { 00:17:37.917 "state": "completed", 00:17:37.917 "digest": "sha256", 00:17:37.917 "dhgroup": "ffdhe3072" 00:17:37.917 } 00:17:37.917 } 00:17:37.917 ]' 00:17:37.917 20:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.917 20:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:37.917 20:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.175 20:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:38.175 20:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.175 20:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.175 20:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.175 20:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.433 20:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODE4MTBiNzQ4YWYzOTMwMjExYjVlYWEwMGQ3NjYyMDI5MzY2YzQyNGRiYmVjYjU4ZDE5MDAyMTM4NTEyZTE5MiTP0Ak=: 00:17:39.000 20:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.000 20:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:39.000 20:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.000 20:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.000 20:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.000 20:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.000 20:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.000 20:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.001 20:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.260 20:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:17:39.260 20:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.260 20:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:39.260 20:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:39.260 20:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:39.260 20:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.260 20:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.260 20:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.260 20:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.260 20:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.260 20:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.260 20:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.828 00:17:39.828 20:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.828 20:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.828 20:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.828 20:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.828 20:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.828 20:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.828 20:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.828 20:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.828 20:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.828 { 00:17:39.828 "cntlid": 25, 00:17:39.828 "qid": 0, 00:17:39.828 "state": "enabled", 00:17:39.828 "thread": "nvmf_tgt_poll_group_000", 00:17:39.828 "listen_address": { 00:17:39.828 "trtype": "TCP", 00:17:39.828 "adrfam": "IPv4", 00:17:39.828 "traddr": "10.0.0.2", 00:17:39.828 "trsvcid": "4420" 00:17:39.828 }, 00:17:39.828 "peer_address": { 00:17:39.828 "trtype": "TCP", 00:17:39.828 "adrfam": "IPv4", 00:17:39.828 "traddr": "10.0.0.1", 00:17:39.828 "trsvcid": "45802" 00:17:39.828 }, 00:17:39.828 "auth": { 00:17:39.828 "state": "completed", 00:17:39.828 "digest": "sha256", 00:17:39.828 "dhgroup": "ffdhe4096" 00:17:39.828 } 00:17:39.828 } 00:17:39.828 ]' 00:17:40.086 20:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.086 20:16:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.086 20:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.086 20:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:40.086 20:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.086 20:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.086 20:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.086 20:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.345 20:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OWM0OGM5NzRhYThjYmQxOTMyMWQ5MmFhZTM4NzdlYWM2ZDUwNzhhMDQ5YzEwMGI0NxiuJA==: --dhchap-ctrl-secret DHHC-1:03:NWVhOThiNTBjZDIxMTIwZGQ5NGQ2NWY3MzcyYmE4YWRhYjNjNTkwODY5N2ZmNjkwYjVkMzVmMGFjNzkzYWMyYjtQE7M=: 00:17:41.280 20:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.280 20:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:41.280 20:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.280 20:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.280 20:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.280 20:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.280 20:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:41.280 20:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:41.539 20:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:41.539 20:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.539 20:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:41.539 20:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:41.539 20:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:41.539 20:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.539 20:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.539 20:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.539 20:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.539 20:16:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.539 20:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.539 20:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.797 00:17:41.797 20:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.797 20:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.797 20:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.056 20:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.056 20:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.056 20:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.056 20:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.056 20:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.056 20:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.056 { 00:17:42.056 "cntlid": 27, 00:17:42.056 "qid": 0, 00:17:42.056 "state": "enabled", 00:17:42.056 "thread": "nvmf_tgt_poll_group_000", 00:17:42.056 "listen_address": { 00:17:42.056 "trtype": "TCP", 00:17:42.056 "adrfam": "IPv4", 00:17:42.056 "traddr": "10.0.0.2", 00:17:42.056 "trsvcid": "4420" 00:17:42.056 }, 00:17:42.056 "peer_address": { 00:17:42.056 "trtype": "TCP", 00:17:42.056 "adrfam": "IPv4", 00:17:42.056 "traddr": "10.0.0.1", 00:17:42.056 "trsvcid": "45832" 00:17:42.056 }, 00:17:42.056 "auth": { 00:17:42.056 "state": "completed", 00:17:42.056 "digest": "sha256", 00:17:42.056 "dhgroup": "ffdhe4096" 00:17:42.056 } 00:17:42.056 } 00:17:42.056 ]' 00:17:42.056 20:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.056 20:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.056 20:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.056 20:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:42.056 20:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.056 20:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.056 20:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.056 20:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.315 20:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzY4MGZmNTM4YzgxNDZhNmRmMjU0MDg0ZmI2NGIyNWFEugdb: --dhchap-ctrl-secret DHHC-1:02:ZGM5OTUwMDI5MjQ3MDgwNmZkMmM1ODU0YTRlMjY3ZDUzZTMxOGI3MDlkOTI0Mzhla+EIEg==: 00:17:43.251 20:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.510 20:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:43.510 20:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.510 20:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.510 20:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.510 20:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.510 20:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:43.510 20:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:43.510 20:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:43.510 20:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.510 20:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:43.510 20:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:43.511 20:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:43.511 20:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.511 20:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.511 20:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.511 20:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.770 20:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.770 20:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.770 20:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.028 00:17:44.028 20:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.028 20:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.028 20:16:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.286 20:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.286 20:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.286 20:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.286 20:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.286 20:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.286 20:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.286 { 00:17:44.286 "cntlid": 29, 00:17:44.286 "qid": 0, 00:17:44.286 "state": "enabled", 00:17:44.286 "thread": "nvmf_tgt_poll_group_000", 00:17:44.286 "listen_address": { 00:17:44.286 "trtype": "TCP", 00:17:44.286 "adrfam": "IPv4", 00:17:44.286 "traddr": "10.0.0.2", 00:17:44.286 "trsvcid": "4420" 00:17:44.286 }, 00:17:44.286 "peer_address": { 00:17:44.286 "trtype": "TCP", 00:17:44.286 "adrfam": "IPv4", 00:17:44.286 "traddr": "10.0.0.1", 00:17:44.286 "trsvcid": "45852" 00:17:44.286 }, 00:17:44.286 "auth": { 00:17:44.286 "state": "completed", 00:17:44.286 "digest": "sha256", 00:17:44.286 "dhgroup": "ffdhe4096" 00:17:44.286 } 00:17:44.286 } 00:17:44.286 ]' 00:17:44.286 20:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.286 20:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.286 20:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.286 20:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:44.286 20:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.286 20:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.286 20:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.286 20:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.544 20:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZTIwNDkzY2E3N2ViNjdiYmQzODAzMDc2Mzk1ZjFmMzhjODMzMGQ1NjMzYWRhYTUy6e5vNA==: --dhchap-ctrl-secret DHHC-1:01:OWEyNDQ0NWZmZjYyN2FmYWJkNmExNDFjY2RhYmRhNDYXMKE8: 00:17:45.481 20:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.481 20:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:45.481 20:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.481 20:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.481 20:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.481 20:16:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.481 20:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:45.481 20:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:45.739 20:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:45.739 20:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.739 20:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:45.739 20:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:45.739 20:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:45.739 20:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.739 20:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:45.739 20:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.739 20:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.739 20:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.739 20:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.739 20:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.998 00:17:45.998 20:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.998 20:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.998 20:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.257 20:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.257 20:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.257 20:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.257 20:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.257 20:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.257 20:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.257 { 00:17:46.257 "cntlid": 31, 00:17:46.257 "qid": 0, 00:17:46.257 "state": "enabled", 00:17:46.257 "thread": "nvmf_tgt_poll_group_000", 00:17:46.257 "listen_address": { 00:17:46.257 "trtype": "TCP", 00:17:46.257 "adrfam": "IPv4", 00:17:46.257 "traddr": "10.0.0.2", 00:17:46.257 "trsvcid": "4420" 00:17:46.257 }, 
00:17:46.257 "peer_address": { 00:17:46.257 "trtype": "TCP", 00:17:46.257 "adrfam": "IPv4", 00:17:46.257 "traddr": "10.0.0.1", 00:17:46.257 "trsvcid": "45864" 00:17:46.257 }, 00:17:46.257 "auth": { 00:17:46.257 "state": "completed", 00:17:46.257 "digest": "sha256", 00:17:46.257 "dhgroup": "ffdhe4096" 00:17:46.257 } 00:17:46.257 } 00:17:46.257 ]' 00:17:46.257 20:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.257 20:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.257 20:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.257 20:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:46.257 20:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.257 20:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.257 20:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.257 20:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.516 20:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODE4MTBiNzQ4YWYzOTMwMjExYjVlYWEwMGQ3NjYyMDI5MzY2YzQyNGRiYmVjYjU4ZDE5MDAyMTM4NTEyZTE5MiTP0Ak=: 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.451 20:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.016 00:17:48.016 20:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.016 20:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.016 20:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.274 20:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.274 20:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.274 20:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.274 20:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.274 20:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.274 20:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.274 { 00:17:48.274 "cntlid": 33, 00:17:48.274 "qid": 0, 00:17:48.274 "state": "enabled", 00:17:48.274 "thread": "nvmf_tgt_poll_group_000", 00:17:48.274 "listen_address": { 00:17:48.274 "trtype": "TCP", 00:17:48.274 "adrfam": "IPv4", 00:17:48.274 "traddr": "10.0.0.2", 00:17:48.274 "trsvcid": "4420" 00:17:48.274 }, 00:17:48.274 "peer_address": { 00:17:48.274 "trtype": "TCP", 00:17:48.274 "adrfam": "IPv4", 00:17:48.274 "traddr": "10.0.0.1", 00:17:48.274 "trsvcid": "60872" 00:17:48.274 }, 00:17:48.274 "auth": { 00:17:48.274 "state": "completed", 00:17:48.274 "digest": "sha256", 00:17:48.274 "dhgroup": "ffdhe6144" 00:17:48.274 } 00:17:48.274 } 00:17:48.274 ]' 00:17:48.274 20:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.274 20:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.274 20:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.274 20:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:48.274 20:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.532 20:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.532 20:16:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.532 20:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.789 20:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OWM0OGM5NzRhYThjYmQxOTMyMWQ5MmFhZTM4NzdlYWM2ZDUwNzhhMDQ5YzEwMGI0NxiuJA==: --dhchap-ctrl-secret DHHC-1:03:NWVhOThiNTBjZDIxMTIwZGQ5NGQ2NWY3MzcyYmE4YWRhYjNjNTkwODY5N2ZmNjkwYjVkMzVmMGFjNzkzYWMyYjtQE7M=: 00:17:49.721 20:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.722 20:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:49.722 20:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.722 20:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.722 20:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.722 20:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.722 20:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:49.722 20:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:49.979 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:49.979 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.979 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.979 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:49.979 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:49.979 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.979 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.979 20:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.979 20:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.979 20:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.979 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.979 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.236 00:17:50.496 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.496 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.496 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.496 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.496 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.496 20:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.496 20:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.755 20:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.755 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.755 { 00:17:50.755 "cntlid": 35, 00:17:50.755 "qid": 0, 00:17:50.755 "state": "enabled", 00:17:50.755 "thread": "nvmf_tgt_poll_group_000", 00:17:50.755 "listen_address": { 00:17:50.755 "trtype": "TCP", 00:17:50.755 "adrfam": "IPv4", 00:17:50.755 "traddr": "10.0.0.2", 00:17:50.755 "trsvcid": "4420" 00:17:50.755 }, 00:17:50.755 "peer_address": { 00:17:50.755 "trtype": "TCP", 00:17:50.755 "adrfam": "IPv4", 00:17:50.755 "traddr": "10.0.0.1", 00:17:50.755 "trsvcid": "60912" 00:17:50.755 }, 00:17:50.755 "auth": { 00:17:50.755 "state": "completed", 00:17:50.755 "digest": "sha256", 00:17:50.755 "dhgroup": "ffdhe6144" 00:17:50.755 } 00:17:50.755 } 00:17:50.755 ]' 00:17:50.755 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.755 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.755 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.755 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:50.755 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.755 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.755 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.755 20:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.014 20:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzY4MGZmNTM4YzgxNDZhNmRmMjU0MDg0ZmI2NGIyNWFEugdb: --dhchap-ctrl-secret DHHC-1:02:ZGM5OTUwMDI5MjQ3MDgwNmZkMmM1ODU0YTRlMjY3ZDUzZTMxOGI3MDlkOTI0Mzhla+EIEg==: 00:17:51.950 20:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.950 20:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:51.950 20:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.950 20:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.950 20:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.950 20:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.950 20:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:51.950 20:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:51.950 20:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:51.950 20:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.950 20:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:51.950 20:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:51.950 20:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:51.950 20:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.950 20:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.950 20:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.950 20:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.950 20:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.950 20:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.950 20:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.540 00:17:52.540 20:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.540 20:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.540 20:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.540 20:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.540 20:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.540 20:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.540 20:16:17 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:52.799 20:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.799 20:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.799 { 00:17:52.799 "cntlid": 37, 00:17:52.799 "qid": 0, 00:17:52.799 "state": "enabled", 00:17:52.799 "thread": "nvmf_tgt_poll_group_000", 00:17:52.799 "listen_address": { 00:17:52.800 "trtype": "TCP", 00:17:52.800 "adrfam": "IPv4", 00:17:52.800 "traddr": "10.0.0.2", 00:17:52.800 "trsvcid": "4420" 00:17:52.800 }, 00:17:52.800 "peer_address": { 00:17:52.800 "trtype": "TCP", 00:17:52.800 "adrfam": "IPv4", 00:17:52.800 "traddr": "10.0.0.1", 00:17:52.800 "trsvcid": "60934" 00:17:52.800 }, 00:17:52.800 "auth": { 00:17:52.800 "state": "completed", 00:17:52.800 "digest": "sha256", 00:17:52.800 "dhgroup": "ffdhe6144" 00:17:52.800 } 00:17:52.800 } 00:17:52.800 ]' 00:17:52.800 20:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.800 20:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.800 20:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.800 20:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:52.800 20:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.800 20:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.800 20:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.800 20:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.058 20:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZTIwNDkzY2E3N2ViNjdiYmQzODAzMDc2Mzk1ZjFmMzhjODMzMGQ1NjMzYWRhYTUy6e5vNA==: --dhchap-ctrl-secret DHHC-1:01:OWEyNDQ0NWZmZjYyN2FmYWJkNmExNDFjY2RhYmRhNDYXMKE8: 00:17:53.996 20:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.996 20:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:53.996 20:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.996 20:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.996 20:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.996 20:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.996 20:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:53.996 20:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:53.996 20:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:17:53.996 20:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.996 20:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:53.996 20:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:53.996 20:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:53.996 20:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.996 20:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:53.996 20:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.996 20:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.255 20:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.255 20:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.255 20:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.515 00:17:54.515 20:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.515 20:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.515 20:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.774 20:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.774 20:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.774 20:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.774 20:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.774 20:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.774 20:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.774 { 00:17:54.774 "cntlid": 39, 00:17:54.774 "qid": 0, 00:17:54.774 "state": "enabled", 00:17:54.774 "thread": "nvmf_tgt_poll_group_000", 00:17:54.774 "listen_address": { 00:17:54.774 "trtype": "TCP", 00:17:54.774 "adrfam": "IPv4", 00:17:54.774 "traddr": "10.0.0.2", 00:17:54.774 "trsvcid": "4420" 00:17:54.774 }, 00:17:54.774 "peer_address": { 00:17:54.774 "trtype": "TCP", 00:17:54.774 "adrfam": "IPv4", 00:17:54.774 "traddr": "10.0.0.1", 00:17:54.774 "trsvcid": "60966" 00:17:54.774 }, 00:17:54.774 "auth": { 00:17:54.774 "state": "completed", 00:17:54.774 "digest": "sha256", 00:17:54.774 "dhgroup": "ffdhe6144" 00:17:54.774 } 00:17:54.774 } 00:17:54.774 ]' 00:17:54.774 20:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.031 20:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.031 20:16:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.031 20:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:55.031 20:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.031 20:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.031 20:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.031 20:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.289 20:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODE4MTBiNzQ4YWYzOTMwMjExYjVlYWEwMGQ3NjYyMDI5MzY2YzQyNGRiYmVjYjU4ZDE5MDAyMTM4NTEyZTE5MiTP0Ak=: 00:17:56.225 20:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.225 20:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:56.225 20:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.225 20:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.225 20:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.225 20:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.225 20:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.225 20:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:56.225 20:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:56.225 20:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:56.225 20:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.225 20:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.225 20:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:56.225 20:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:56.225 20:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.225 20:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.225 20:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.225 20:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.225 20:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.225 20:16:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.225 20:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.162 00:17:57.162 20:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.162 20:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.162 20:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.162 20:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.162 20:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.162 20:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.162 20:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.421 20:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.421 20:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.421 { 00:17:57.421 "cntlid": 41, 00:17:57.421 "qid": 0, 00:17:57.421 "state": "enabled", 00:17:57.421 "thread": "nvmf_tgt_poll_group_000", 00:17:57.421 "listen_address": { 00:17:57.421 "trtype": "TCP", 00:17:57.421 "adrfam": "IPv4", 00:17:57.421 "traddr": "10.0.0.2", 00:17:57.421 "trsvcid": "4420" 00:17:57.421 }, 00:17:57.421 "peer_address": { 00:17:57.421 "trtype": "TCP", 00:17:57.421 "adrfam": "IPv4", 00:17:57.421 "traddr": "10.0.0.1", 00:17:57.421 "trsvcid": "54420" 00:17:57.421 }, 00:17:57.421 "auth": { 00:17:57.421 "state": "completed", 00:17:57.421 "digest": "sha256", 00:17:57.421 "dhgroup": "ffdhe8192" 00:17:57.421 } 00:17:57.421 } 00:17:57.421 ]' 00:17:57.421 20:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.421 20:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.421 20:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.421 20:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:57.421 20:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.421 20:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.421 20:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.421 20:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.679 20:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:OWM0OGM5NzRhYThjYmQxOTMyMWQ5MmFhZTM4NzdlYWM2ZDUwNzhhMDQ5YzEwMGI0NxiuJA==: --dhchap-ctrl-secret DHHC-1:03:NWVhOThiNTBjZDIxMTIwZGQ5NGQ2NWY3MzcyYmE4YWRhYjNjNTkwODY5N2ZmNjkwYjVkMzVmMGFjNzkzYWMyYjtQE7M=: 00:17:58.616 20:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.616 20:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:58.616 20:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.616 20:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.616 20:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.616 20:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.616 20:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:58.616 20:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:58.616 20:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:58.616 20:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.616 20:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:58.616 20:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:58.616 20:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:58.616 20:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.616 20:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.616 20:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.616 20:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.616 20:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.616 20:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.616 20:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.182 00:17:59.182 20:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.182 20:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.182 20:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.441 20:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.441 20:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.441 20:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.441 20:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.441 20:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.441 20:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.441 { 00:17:59.441 "cntlid": 43, 00:17:59.441 "qid": 0, 00:17:59.441 "state": "enabled", 00:17:59.441 "thread": "nvmf_tgt_poll_group_000", 00:17:59.441 "listen_address": { 00:17:59.441 "trtype": "TCP", 00:17:59.441 "adrfam": "IPv4", 00:17:59.441 "traddr": "10.0.0.2", 00:17:59.441 "trsvcid": "4420" 00:17:59.441 }, 00:17:59.441 "peer_address": { 00:17:59.441 "trtype": "TCP", 00:17:59.441 "adrfam": "IPv4", 00:17:59.441 "traddr": "10.0.0.1", 00:17:59.441 "trsvcid": "54442" 00:17:59.441 }, 00:17:59.441 "auth": { 00:17:59.441 "state": "completed", 00:17:59.441 "digest": "sha256", 00:17:59.441 "dhgroup": "ffdhe8192" 00:17:59.441 } 00:17:59.441 } 00:17:59.441 ]' 00:17:59.441 20:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.720 20:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.720 20:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.720 20:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:59.720 20:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.720 20:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.720 20:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.720 20:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.983 20:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzY4MGZmNTM4YzgxNDZhNmRmMjU0MDg0ZmI2NGIyNWFEugdb: --dhchap-ctrl-secret DHHC-1:02:ZGM5OTUwMDI5MjQ3MDgwNmZkMmM1ODU0YTRlMjY3ZDUzZTMxOGI3MDlkOTI0Mzhla+EIEg==: 00:18:00.548 20:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.806 20:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:00.806 20:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.806 20:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.806 20:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.806 20:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:18:00.806 20:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:00.806 20:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:00.806 20:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:00.806 20:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.806 20:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:00.806 20:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:00.806 20:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:00.806 20:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.806 20:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.806 20:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.806 20:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.806 20:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.806 20:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.806 20:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.740 00:18:01.740 20:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.740 20:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.740 20:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.740 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.740 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.740 20:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.740 20:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.740 20:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.740 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.740 { 00:18:01.740 "cntlid": 45, 00:18:01.740 "qid": 0, 00:18:01.740 "state": "enabled", 00:18:01.740 "thread": "nvmf_tgt_poll_group_000", 00:18:01.740 "listen_address": { 00:18:01.740 "trtype": "TCP", 00:18:01.740 "adrfam": "IPv4", 00:18:01.740 "traddr": "10.0.0.2", 00:18:01.740 "trsvcid": "4420" 
00:18:01.740 }, 00:18:01.740 "peer_address": { 00:18:01.740 "trtype": "TCP", 00:18:01.740 "adrfam": "IPv4", 00:18:01.740 "traddr": "10.0.0.1", 00:18:01.740 "trsvcid": "54464" 00:18:01.740 }, 00:18:01.740 "auth": { 00:18:01.740 "state": "completed", 00:18:01.740 "digest": "sha256", 00:18:01.740 "dhgroup": "ffdhe8192" 00:18:01.740 } 00:18:01.740 } 00:18:01.740 ]' 00:18:01.740 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.000 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.000 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.000 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:02.000 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.000 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.000 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.000 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.300 20:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZTIwNDkzY2E3N2ViNjdiYmQzODAzMDc2Mzk1ZjFmMzhjODMzMGQ1NjMzYWRhYTUy6e5vNA==: --dhchap-ctrl-secret DHHC-1:01:OWEyNDQ0NWZmZjYyN2FmYWJkNmExNDFjY2RhYmRhNDYXMKE8: 00:18:02.919 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.919 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:02.919 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.919 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.919 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.919 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.919 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:02.919 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:03.176 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:03.176 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.176 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:03.176 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:03.176 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:03.176 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.176 20:16:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:03.176 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.176 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.176 20:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.176 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:03.176 20:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:04.109 00:18:04.109 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.109 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.109 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.109 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.109 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.109 20:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.109 20:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.109 20:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.109 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.109 { 00:18:04.109 "cntlid": 47, 00:18:04.109 "qid": 0, 00:18:04.109 "state": "enabled", 00:18:04.109 "thread": "nvmf_tgt_poll_group_000", 00:18:04.109 "listen_address": { 00:18:04.109 "trtype": "TCP", 00:18:04.109 "adrfam": "IPv4", 00:18:04.109 "traddr": "10.0.0.2", 00:18:04.109 "trsvcid": "4420" 00:18:04.109 }, 00:18:04.109 "peer_address": { 00:18:04.109 "trtype": "TCP", 00:18:04.109 "adrfam": "IPv4", 00:18:04.109 "traddr": "10.0.0.1", 00:18:04.109 "trsvcid": "54480" 00:18:04.109 }, 00:18:04.109 "auth": { 00:18:04.109 "state": "completed", 00:18:04.109 "digest": "sha256", 00:18:04.109 "dhgroup": "ffdhe8192" 00:18:04.109 } 00:18:04.109 } 00:18:04.109 ]' 00:18:04.109 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.109 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.109 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.366 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:04.366 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.366 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.366 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.366 
20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.623 20:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODE4MTBiNzQ4YWYzOTMwMjExYjVlYWEwMGQ3NjYyMDI5MzY2YzQyNGRiYmVjYjU4ZDE5MDAyMTM4NTEyZTE5MiTP0Ak=: 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.559 20:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.816 00:18:05.816 20:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.816 20:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.816 20:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.074 20:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.074 20:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.074 20:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.074 20:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.074 20:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.074 20:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.074 { 00:18:06.074 "cntlid": 49, 00:18:06.074 "qid": 0, 00:18:06.074 "state": "enabled", 00:18:06.074 "thread": "nvmf_tgt_poll_group_000", 00:18:06.074 "listen_address": { 00:18:06.074 "trtype": "TCP", 00:18:06.074 "adrfam": "IPv4", 00:18:06.074 "traddr": "10.0.0.2", 00:18:06.074 "trsvcid": "4420" 00:18:06.074 }, 00:18:06.074 "peer_address": { 00:18:06.074 "trtype": "TCP", 00:18:06.074 "adrfam": "IPv4", 00:18:06.074 "traddr": "10.0.0.1", 00:18:06.074 "trsvcid": "54510" 00:18:06.074 }, 00:18:06.074 "auth": { 00:18:06.074 "state": "completed", 00:18:06.074 "digest": "sha384", 00:18:06.074 "dhgroup": "null" 00:18:06.074 } 00:18:06.074 } 00:18:06.074 ]' 00:18:06.074 20:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.074 20:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.074 20:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.332 20:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:06.332 20:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.332 20:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.332 20:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.332 20:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.589 20:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OWM0OGM5NzRhYThjYmQxOTMyMWQ5MmFhZTM4NzdlYWM2ZDUwNzhhMDQ5YzEwMGI0NxiuJA==: --dhchap-ctrl-secret DHHC-1:03:NWVhOThiNTBjZDIxMTIwZGQ5NGQ2NWY3MzcyYmE4YWRhYjNjNTkwODY5N2ZmNjkwYjVkMzVmMGFjNzkzYWMyYjtQE7M=: 00:18:07.523 20:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.523 20:16:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:07.523 20:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.523 20:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.523 20:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.523 20:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.523 20:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:07.523 20:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:07.523 20:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:07.523 20:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.523 20:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:07.523 20:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:07.523 20:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:07.523 20:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.523 20:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.523 20:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.523 20:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.523 20:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.523 20:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.523 20:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.781 00:18:07.781 20:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.781 20:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.781 20:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.040 20:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.040 20:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.040 20:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.040 20:16:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:08.040 20:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.040 20:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.040 { 00:18:08.040 "cntlid": 51, 00:18:08.040 "qid": 0, 00:18:08.040 "state": "enabled", 00:18:08.040 "thread": "nvmf_tgt_poll_group_000", 00:18:08.040 "listen_address": { 00:18:08.040 "trtype": "TCP", 00:18:08.040 "adrfam": "IPv4", 00:18:08.040 "traddr": "10.0.0.2", 00:18:08.040 "trsvcid": "4420" 00:18:08.040 }, 00:18:08.040 "peer_address": { 00:18:08.040 "trtype": "TCP", 00:18:08.040 "adrfam": "IPv4", 00:18:08.040 "traddr": "10.0.0.1", 00:18:08.040 "trsvcid": "36314" 00:18:08.040 }, 00:18:08.040 "auth": { 00:18:08.040 "state": "completed", 00:18:08.040 "digest": "sha384", 00:18:08.040 "dhgroup": "null" 00:18:08.040 } 00:18:08.040 } 00:18:08.040 ]' 00:18:08.040 20:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.040 20:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:08.040 20:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.298 20:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:08.298 20:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.298 20:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.298 20:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.298 20:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.556 20:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzY4MGZmNTM4YzgxNDZhNmRmMjU0MDg0ZmI2NGIyNWFEugdb: --dhchap-ctrl-secret DHHC-1:02:ZGM5OTUwMDI5MjQ3MDgwNmZkMmM1ODU0YTRlMjY3ZDUzZTMxOGI3MDlkOTI0Mzhla+EIEg==: 00:18:09.490 20:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.491 20:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:09.491 20:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.491 20:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.491 20:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.491 20:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.491 20:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:09.491 20:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:09.750 20:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:09.750 20:16:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.750 20:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:09.750 20:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:09.750 20:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:09.750 20:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.750 20:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.750 20:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.750 20:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.750 20:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.750 20:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.750 20:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.009 00:18:10.009 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.009 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.009 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.268 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.268 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.268 20:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.268 20:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.268 20:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.268 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.268 { 00:18:10.268 "cntlid": 53, 00:18:10.268 "qid": 0, 00:18:10.268 "state": "enabled", 00:18:10.268 "thread": "nvmf_tgt_poll_group_000", 00:18:10.268 "listen_address": { 00:18:10.268 "trtype": "TCP", 00:18:10.268 "adrfam": "IPv4", 00:18:10.268 "traddr": "10.0.0.2", 00:18:10.268 "trsvcid": "4420" 00:18:10.268 }, 00:18:10.268 "peer_address": { 00:18:10.268 "trtype": "TCP", 00:18:10.268 "adrfam": "IPv4", 00:18:10.268 "traddr": "10.0.0.1", 00:18:10.268 "trsvcid": "36350" 00:18:10.268 }, 00:18:10.268 "auth": { 00:18:10.268 "state": "completed", 00:18:10.268 "digest": "sha384", 00:18:10.268 "dhgroup": "null" 00:18:10.268 } 00:18:10.268 } 00:18:10.268 ]' 00:18:10.268 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.268 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:18:10.268 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.268 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:10.268 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.268 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.268 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.268 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.527 20:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZTIwNDkzY2E3N2ViNjdiYmQzODAzMDc2Mzk1ZjFmMzhjODMzMGQ1NjMzYWRhYTUy6e5vNA==: --dhchap-ctrl-secret DHHC-1:01:OWEyNDQ0NWZmZjYyN2FmYWJkNmExNDFjY2RhYmRhNDYXMKE8: 00:18:11.904 20:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.904 20:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:11.904 20:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.904 20:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.905 20:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.905 20:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.905 20:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:11.905 20:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:11.905 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:11.905 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.905 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:11.905 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:11.905 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:11.905 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.905 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:11.905 20:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.905 20:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.905 20:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.905 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.905 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.164 00:18:12.164 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.164 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.164 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.423 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.423 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.423 20:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.423 20:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.423 20:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.423 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.423 { 00:18:12.423 "cntlid": 55, 00:18:12.423 "qid": 0, 00:18:12.423 "state": "enabled", 00:18:12.423 "thread": "nvmf_tgt_poll_group_000", 00:18:12.423 "listen_address": { 00:18:12.423 "trtype": "TCP", 00:18:12.423 "adrfam": "IPv4", 00:18:12.423 "traddr": "10.0.0.2", 00:18:12.423 "trsvcid": "4420" 00:18:12.423 }, 00:18:12.423 "peer_address": { 00:18:12.423 "trtype": "TCP", 00:18:12.423 "adrfam": "IPv4", 00:18:12.423 "traddr": "10.0.0.1", 00:18:12.423 "trsvcid": "36366" 00:18:12.423 }, 00:18:12.423 "auth": { 00:18:12.423 "state": "completed", 00:18:12.423 "digest": "sha384", 00:18:12.423 "dhgroup": "null" 00:18:12.423 } 00:18:12.423 } 00:18:12.423 ]' 00:18:12.423 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.423 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.423 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.423 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:12.423 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.423 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.423 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.423 20:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.682 20:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODE4MTBiNzQ4YWYzOTMwMjExYjVlYWEwMGQ3NjYyMDI5MzY2YzQyNGRiYmVjYjU4ZDE5MDAyMTM4NTEyZTE5MiTP0Ak=: 00:18:13.618 20:16:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.618 20:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:13.618 20:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.618 20:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.618 20:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.618 20:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.618 20:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.618 20:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:13.618 20:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:13.877 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:13.877 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.877 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:13.877 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:13.877 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:13.877 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.877 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.877 20:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.877 20:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.877 20:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.877 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.877 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.136 00:18:14.136 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.136 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.136 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.396 20:16:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.396 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.396 20:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.396 20:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.396 20:16:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.396 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.396 { 00:18:14.396 "cntlid": 57, 00:18:14.396 "qid": 0, 00:18:14.396 "state": "enabled", 00:18:14.396 "thread": "nvmf_tgt_poll_group_000", 00:18:14.396 "listen_address": { 00:18:14.396 "trtype": "TCP", 00:18:14.396 "adrfam": "IPv4", 00:18:14.396 "traddr": "10.0.0.2", 00:18:14.396 "trsvcid": "4420" 00:18:14.396 }, 00:18:14.396 "peer_address": { 00:18:14.396 "trtype": "TCP", 00:18:14.396 "adrfam": "IPv4", 00:18:14.396 "traddr": "10.0.0.1", 00:18:14.396 "trsvcid": "36388" 00:18:14.396 }, 00:18:14.396 "auth": { 00:18:14.396 "state": "completed", 00:18:14.396 "digest": "sha384", 00:18:14.396 "dhgroup": "ffdhe2048" 00:18:14.396 } 00:18:14.396 } 00:18:14.396 ]' 00:18:14.396 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.396 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.396 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.396 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:14.396 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.396 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.396 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.396 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.655 20:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OWM0OGM5NzRhYThjYmQxOTMyMWQ5MmFhZTM4NzdlYWM2ZDUwNzhhMDQ5YzEwMGI0NxiuJA==: --dhchap-ctrl-secret DHHC-1:03:NWVhOThiNTBjZDIxMTIwZGQ5NGQ2NWY3MzcyYmE4YWRhYjNjNTkwODY5N2ZmNjkwYjVkMzVmMGFjNzkzYWMyYjtQE7M=: 00:18:15.589 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.589 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:15.589 20:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.589 20:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.589 20:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.589 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.589 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:15.589 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:15.848 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:15.848 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.848 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:15.848 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:15.848 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:15.848 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.848 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.848 20:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.848 20:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.848 20:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.848 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.848 20:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.107 00:18:16.107 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.107 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.107 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.367 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.367 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.367 20:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.367 20:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.367 20:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.367 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.367 { 00:18:16.367 "cntlid": 59, 00:18:16.367 "qid": 0, 00:18:16.367 "state": "enabled", 00:18:16.367 "thread": "nvmf_tgt_poll_group_000", 00:18:16.367 "listen_address": { 00:18:16.367 "trtype": "TCP", 00:18:16.367 "adrfam": "IPv4", 00:18:16.367 "traddr": "10.0.0.2", 00:18:16.367 "trsvcid": "4420" 00:18:16.367 }, 00:18:16.367 "peer_address": { 00:18:16.367 "trtype": "TCP", 00:18:16.367 "adrfam": "IPv4", 00:18:16.367 
"traddr": "10.0.0.1", 00:18:16.367 "trsvcid": "36420" 00:18:16.367 }, 00:18:16.367 "auth": { 00:18:16.367 "state": "completed", 00:18:16.367 "digest": "sha384", 00:18:16.367 "dhgroup": "ffdhe2048" 00:18:16.367 } 00:18:16.367 } 00:18:16.367 ]' 00:18:16.367 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.367 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.367 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.367 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:16.367 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.626 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.626 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.626 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.626 20:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzY4MGZmNTM4YzgxNDZhNmRmMjU0MDg0ZmI2NGIyNWFEugdb: --dhchap-ctrl-secret DHHC-1:02:ZGM5OTUwMDI5MjQ3MDgwNmZkMmM1ODU0YTRlMjY3ZDUzZTMxOGI3MDlkOTI0Mzhla+EIEg==: 00:18:17.589 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.589 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:17.589 20:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.589 20:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.589 20:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.589 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.589 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:17.589 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:17.589 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:17.589 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.589 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:17.589 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:17.589 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:17.589 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.589 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.589 20:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.589 20:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.589 20:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.589 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.589 20:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.848 00:18:17.848 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.848 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.848 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.108 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.108 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.108 20:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.108 20:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.108 20:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.108 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.108 { 00:18:18.108 "cntlid": 61, 00:18:18.108 "qid": 0, 00:18:18.108 "state": "enabled", 00:18:18.108 "thread": "nvmf_tgt_poll_group_000", 00:18:18.108 "listen_address": { 00:18:18.108 "trtype": "TCP", 00:18:18.108 "adrfam": "IPv4", 00:18:18.108 "traddr": "10.0.0.2", 00:18:18.108 "trsvcid": "4420" 00:18:18.108 }, 00:18:18.108 "peer_address": { 00:18:18.108 "trtype": "TCP", 00:18:18.108 "adrfam": "IPv4", 00:18:18.108 "traddr": "10.0.0.1", 00:18:18.108 "trsvcid": "41052" 00:18:18.108 }, 00:18:18.108 "auth": { 00:18:18.108 "state": "completed", 00:18:18.108 "digest": "sha384", 00:18:18.108 "dhgroup": "ffdhe2048" 00:18:18.108 } 00:18:18.108 } 00:18:18.108 ]' 00:18:18.108 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.108 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.108 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.367 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:18.367 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.367 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.367 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.367 20:16:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.625 20:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZTIwNDkzY2E3N2ViNjdiYmQzODAzMDc2Mzk1ZjFmMzhjODMzMGQ1NjMzYWRhYTUy6e5vNA==: --dhchap-ctrl-secret DHHC-1:01:OWEyNDQ0NWZmZjYyN2FmYWJkNmExNDFjY2RhYmRhNDYXMKE8: 00:18:19.193 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.193 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:19.193 20:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.193 20:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.193 20:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.193 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.193 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:19.193 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:19.453 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:19.453 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.453 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:19.453 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:19.453 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:19.453 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.453 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:19.453 20:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.453 20:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.453 20:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.453 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.453 20:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.022 00:18:20.022 20:16:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.022 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.022 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.022 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.022 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.022 20:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.022 20:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.022 20:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.022 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.022 { 00:18:20.022 "cntlid": 63, 00:18:20.022 "qid": 0, 00:18:20.022 "state": "enabled", 00:18:20.022 "thread": "nvmf_tgt_poll_group_000", 00:18:20.022 "listen_address": { 00:18:20.022 "trtype": "TCP", 00:18:20.022 "adrfam": "IPv4", 00:18:20.022 "traddr": "10.0.0.2", 00:18:20.022 "trsvcid": "4420" 00:18:20.022 }, 00:18:20.022 "peer_address": { 00:18:20.022 "trtype": "TCP", 00:18:20.022 "adrfam": "IPv4", 00:18:20.022 "traddr": "10.0.0.1", 00:18:20.022 "trsvcid": "41080" 00:18:20.022 }, 00:18:20.022 "auth": { 00:18:20.022 "state": "completed", 00:18:20.022 "digest": "sha384", 00:18:20.022 "dhgroup": "ffdhe2048" 00:18:20.022 } 00:18:20.022 } 00:18:20.022 ]' 00:18:20.022 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.281 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:20.281 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.281 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:20.281 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.281 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.281 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.281 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.539 20:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODE4MTBiNzQ4YWYzOTMwMjExYjVlYWEwMGQ3NjYyMDI5MzY2YzQyNGRiYmVjYjU4ZDE5MDAyMTM4NTEyZTE5MiTP0Ak=: 00:18:21.476 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.476 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:21.476 20:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.476 20:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
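The cycle traced above repeats for each DH group and key index: the host-side bdev_nvme options are restricted to one digest/dhgroup pair, the host NQN is added to the subsystem with the key under test, an SPDK-host controller is attached and the target's qpair is checked for a completed DH-HMAC-CHAP exchange with the expected digest and dhgroup, and the same handshake is then repeated with the kernel initiator before the host entry is removed. What follows is a minimal, hedged sketch of that loop reconstructed only from the commands visible in this log; it is not the SPDK test script itself. The rpc.py path, NQNs, and DHHC secrets are placeholders, the digest/dhgroup/key lists are only the ones exercised in this portion of the run, and the real script omits --dhchap-ctrlr-key / --dhchap-ctrl-secret when no controller key is configured (as seen for key3 above).

    #!/usr/bin/env bash
    # Sketch of the per-dhgroup / per-key authentication cycle seen in the trace.
    # RPC talks to the nvmf target; RPC_HOST talks to the separate bdev_nvme "host"
    # instance on /var/tmp/host.sock, matching the hostrpc calls in the log.
    set -e
    RPC=/path/to/spdk/scripts/rpc.py                 # placeholder path
    RPC_HOST="$RPC -s /var/tmp/host.sock"
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:REPLACE-ME

    for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096; do
      for keyid in 0 1 2 3; do
        # Restrict the SPDK host to a single digest/dhgroup combination.
        $RPC_HOST bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        # Allow this host on the subsystem with the key (and controller key) under test.
        $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        # Authenticate from the SPDK host side.
        $RPC_HOST bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        # Verify the target recorded a completed exchange with the expected parameters.
        $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -e \
            '.[0].auth | .state == "completed" and .digest == "sha384" and .dhgroup == "'$dhgroup'"'
        $RPC_HOST bdev_nvme_detach_controller nvme0
        # Repeat the handshake with the kernel initiator, then clean up for the next key.
        nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
            --dhchap-secret "DHHC-1:00:placeholder:" --dhchap-ctrl-secret "DHHC-1:03:placeholder:"
        nvme disconnect -n "$SUBNQN"
        $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
      done
    done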
00:18:21.476 20:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.476 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.476 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.476 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:21.476 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:21.476 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:21.476 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.476 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:21.476 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:21.476 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:21.476 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.476 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.476 20:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.476 20:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.735 20:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.735 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.735 20:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.735 00:18:21.994 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.995 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.995 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.995 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.995 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.995 20:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.995 20:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.254 20:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.254 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.254 { 
00:18:22.254 "cntlid": 65, 00:18:22.254 "qid": 0, 00:18:22.254 "state": "enabled", 00:18:22.254 "thread": "nvmf_tgt_poll_group_000", 00:18:22.254 "listen_address": { 00:18:22.254 "trtype": "TCP", 00:18:22.254 "adrfam": "IPv4", 00:18:22.254 "traddr": "10.0.0.2", 00:18:22.254 "trsvcid": "4420" 00:18:22.254 }, 00:18:22.254 "peer_address": { 00:18:22.254 "trtype": "TCP", 00:18:22.254 "adrfam": "IPv4", 00:18:22.254 "traddr": "10.0.0.1", 00:18:22.254 "trsvcid": "41098" 00:18:22.254 }, 00:18:22.254 "auth": { 00:18:22.254 "state": "completed", 00:18:22.254 "digest": "sha384", 00:18:22.254 "dhgroup": "ffdhe3072" 00:18:22.254 } 00:18:22.254 } 00:18:22.254 ]' 00:18:22.254 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.254 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.254 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.254 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:22.254 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.254 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.254 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.254 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.513 20:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OWM0OGM5NzRhYThjYmQxOTMyMWQ5MmFhZTM4NzdlYWM2ZDUwNzhhMDQ5YzEwMGI0NxiuJA==: --dhchap-ctrl-secret DHHC-1:03:NWVhOThiNTBjZDIxMTIwZGQ5NGQ2NWY3MzcyYmE4YWRhYjNjNTkwODY5N2ZmNjkwYjVkMzVmMGFjNzkzYWMyYjtQE7M=: 00:18:23.450 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.450 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:23.450 20:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.450 20:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.450 20:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.450 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.450 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:23.451 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:23.710 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:23.710 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.710 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:18:23.710 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:23.710 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:23.710 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.710 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.710 20:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.710 20:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.710 20:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.710 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.710 20:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.969 00:18:23.969 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.969 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.969 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.227 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.227 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.227 20:16:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.227 20:16:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.227 20:16:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.227 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.227 { 00:18:24.227 "cntlid": 67, 00:18:24.227 "qid": 0, 00:18:24.227 "state": "enabled", 00:18:24.227 "thread": "nvmf_tgt_poll_group_000", 00:18:24.227 "listen_address": { 00:18:24.227 "trtype": "TCP", 00:18:24.227 "adrfam": "IPv4", 00:18:24.227 "traddr": "10.0.0.2", 00:18:24.227 "trsvcid": "4420" 00:18:24.227 }, 00:18:24.227 "peer_address": { 00:18:24.227 "trtype": "TCP", 00:18:24.227 "adrfam": "IPv4", 00:18:24.227 "traddr": "10.0.0.1", 00:18:24.227 "trsvcid": "41130" 00:18:24.227 }, 00:18:24.227 "auth": { 00:18:24.227 "state": "completed", 00:18:24.227 "digest": "sha384", 00:18:24.227 "dhgroup": "ffdhe3072" 00:18:24.227 } 00:18:24.227 } 00:18:24.227 ]' 00:18:24.227 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.227 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.227 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.227 20:16:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:24.227 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.227 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.227 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.227 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.486 20:16:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzY4MGZmNTM4YzgxNDZhNmRmMjU0MDg0ZmI2NGIyNWFEugdb: --dhchap-ctrl-secret DHHC-1:02:ZGM5OTUwMDI5MjQ3MDgwNmZkMmM1ODU0YTRlMjY3ZDUzZTMxOGI3MDlkOTI0Mzhla+EIEg==: 00:18:25.427 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.427 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:25.427 20:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.427 20:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.427 20:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.427 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.427 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:25.427 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:25.827 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:25.827 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.827 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:25.827 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:25.827 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:25.827 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.827 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.827 20:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.827 20:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.827 20:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.828 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.828 20:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.085 00:18:26.085 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.085 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.085 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.344 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.344 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.344 20:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.344 20:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.344 20:16:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.344 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.344 { 00:18:26.344 "cntlid": 69, 00:18:26.344 "qid": 0, 00:18:26.344 "state": "enabled", 00:18:26.344 "thread": "nvmf_tgt_poll_group_000", 00:18:26.344 "listen_address": { 00:18:26.344 "trtype": "TCP", 00:18:26.344 "adrfam": "IPv4", 00:18:26.344 "traddr": "10.0.0.2", 00:18:26.344 "trsvcid": "4420" 00:18:26.344 }, 00:18:26.344 "peer_address": { 00:18:26.344 "trtype": "TCP", 00:18:26.344 "adrfam": "IPv4", 00:18:26.344 "traddr": "10.0.0.1", 00:18:26.344 "trsvcid": "41150" 00:18:26.344 }, 00:18:26.344 "auth": { 00:18:26.344 "state": "completed", 00:18:26.344 "digest": "sha384", 00:18:26.344 "dhgroup": "ffdhe3072" 00:18:26.344 } 00:18:26.344 } 00:18:26.344 ]' 00:18:26.344 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.344 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.344 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.344 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:26.344 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.344 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.344 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.344 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.602 20:16:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZTIwNDkzY2E3N2ViNjdiYmQzODAzMDc2Mzk1ZjFmMzhjODMzMGQ1NjMzYWRhYTUy6e5vNA==: --dhchap-ctrl-secret 
DHHC-1:01:OWEyNDQ0NWZmZjYyN2FmYWJkNmExNDFjY2RhYmRhNDYXMKE8: 00:18:27.536 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.536 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:27.536 20:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.536 20:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.536 20:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.536 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.536 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:27.536 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:27.794 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:27.794 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.794 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:27.794 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:27.794 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:27.794 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.794 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:27.795 20:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.795 20:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.795 20:16:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.795 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.795 20:16:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.054 00:18:28.054 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.054 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.054 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.311 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.311 20:16:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.311 20:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.311 20:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.311 20:16:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.311 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.311 { 00:18:28.311 "cntlid": 71, 00:18:28.311 "qid": 0, 00:18:28.311 "state": "enabled", 00:18:28.311 "thread": "nvmf_tgt_poll_group_000", 00:18:28.311 "listen_address": { 00:18:28.311 "trtype": "TCP", 00:18:28.311 "adrfam": "IPv4", 00:18:28.311 "traddr": "10.0.0.2", 00:18:28.311 "trsvcid": "4420" 00:18:28.311 }, 00:18:28.311 "peer_address": { 00:18:28.311 "trtype": "TCP", 00:18:28.311 "adrfam": "IPv4", 00:18:28.311 "traddr": "10.0.0.1", 00:18:28.311 "trsvcid": "46832" 00:18:28.311 }, 00:18:28.312 "auth": { 00:18:28.312 "state": "completed", 00:18:28.312 "digest": "sha384", 00:18:28.312 "dhgroup": "ffdhe3072" 00:18:28.312 } 00:18:28.312 } 00:18:28.312 ]' 00:18:28.312 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.312 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.312 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.312 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:28.312 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.312 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.312 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.312 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.569 20:16:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODE4MTBiNzQ4YWYzOTMwMjExYjVlYWEwMGQ3NjYyMDI5MzY2YzQyNGRiYmVjYjU4ZDE5MDAyMTM4NTEyZTE5MiTP0Ak=: 00:18:29.506 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.506 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:29.506 20:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.506 20:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.506 20:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.506 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.506 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.506 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:29.506 20:16:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:29.765 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:29.765 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.765 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:29.765 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:29.765 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:29.765 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.765 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.765 20:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.765 20:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.765 20:16:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.765 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.765 20:16:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.024 00:18:30.024 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.024 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.024 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.283 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.283 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.283 20:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.283 20:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.283 20:16:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.283 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.283 { 00:18:30.283 "cntlid": 73, 00:18:30.283 "qid": 0, 00:18:30.283 "state": "enabled", 00:18:30.283 "thread": "nvmf_tgt_poll_group_000", 00:18:30.283 "listen_address": { 00:18:30.283 "trtype": "TCP", 00:18:30.283 "adrfam": "IPv4", 00:18:30.283 "traddr": "10.0.0.2", 00:18:30.283 "trsvcid": "4420" 00:18:30.283 }, 00:18:30.283 "peer_address": { 00:18:30.283 "trtype": "TCP", 00:18:30.283 "adrfam": "IPv4", 00:18:30.283 "traddr": "10.0.0.1", 00:18:30.283 "trsvcid": "46866" 00:18:30.283 }, 00:18:30.283 "auth": { 00:18:30.283 
"state": "completed", 00:18:30.283 "digest": "sha384", 00:18:30.283 "dhgroup": "ffdhe4096" 00:18:30.283 } 00:18:30.283 } 00:18:30.283 ]' 00:18:30.283 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.542 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.542 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.542 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:30.542 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.542 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.542 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.542 20:16:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.801 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OWM0OGM5NzRhYThjYmQxOTMyMWQ5MmFhZTM4NzdlYWM2ZDUwNzhhMDQ5YzEwMGI0NxiuJA==: --dhchap-ctrl-secret DHHC-1:03:NWVhOThiNTBjZDIxMTIwZGQ5NGQ2NWY3MzcyYmE4YWRhYjNjNTkwODY5N2ZmNjkwYjVkMzVmMGFjNzkzYWMyYjtQE7M=: 00:18:31.738 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.738 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:31.739 20:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.739 20:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.739 20:16:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.739 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.739 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:31.739 20:16:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:31.739 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:31.739 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.739 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.739 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:31.739 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:31.739 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.739 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.739 20:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.739 20:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.739 20:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.739 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.739 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.306 00:18:32.306 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.306 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.306 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.565 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.565 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.565 20:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.565 20:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.565 20:16:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.565 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.565 { 00:18:32.565 "cntlid": 75, 00:18:32.565 "qid": 0, 00:18:32.565 "state": "enabled", 00:18:32.565 "thread": "nvmf_tgt_poll_group_000", 00:18:32.565 "listen_address": { 00:18:32.565 "trtype": "TCP", 00:18:32.565 "adrfam": "IPv4", 00:18:32.565 "traddr": "10.0.0.2", 00:18:32.565 "trsvcid": "4420" 00:18:32.565 }, 00:18:32.565 "peer_address": { 00:18:32.565 "trtype": "TCP", 00:18:32.565 "adrfam": "IPv4", 00:18:32.565 "traddr": "10.0.0.1", 00:18:32.565 "trsvcid": "46890" 00:18:32.565 }, 00:18:32.565 "auth": { 00:18:32.565 "state": "completed", 00:18:32.565 "digest": "sha384", 00:18:32.565 "dhgroup": "ffdhe4096" 00:18:32.565 } 00:18:32.565 } 00:18:32.565 ]' 00:18:32.565 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.565 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.565 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.565 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:32.565 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.565 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.565 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.565 20:16:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.824 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzY4MGZmNTM4YzgxNDZhNmRmMjU0MDg0ZmI2NGIyNWFEugdb: --dhchap-ctrl-secret DHHC-1:02:ZGM5OTUwMDI5MjQ3MDgwNmZkMmM1ODU0YTRlMjY3ZDUzZTMxOGI3MDlkOTI0Mzhla+EIEg==: 00:18:33.761 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.761 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:33.761 20:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.761 20:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.761 20:16:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.761 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.761 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:33.761 20:16:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:34.020 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:34.020 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.020 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:34.020 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:34.020 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:34.020 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.020 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.020 20:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.020 20:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.020 20:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.020 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.020 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:34.279 00:18:34.279 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.279 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.279 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.538 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.538 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.538 20:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.538 20:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.538 20:16:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.538 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.538 { 00:18:34.538 "cntlid": 77, 00:18:34.538 "qid": 0, 00:18:34.538 "state": "enabled", 00:18:34.538 "thread": "nvmf_tgt_poll_group_000", 00:18:34.538 "listen_address": { 00:18:34.538 "trtype": "TCP", 00:18:34.538 "adrfam": "IPv4", 00:18:34.538 "traddr": "10.0.0.2", 00:18:34.538 "trsvcid": "4420" 00:18:34.538 }, 00:18:34.538 "peer_address": { 00:18:34.538 "trtype": "TCP", 00:18:34.538 "adrfam": "IPv4", 00:18:34.538 "traddr": "10.0.0.1", 00:18:34.538 "trsvcid": "46910" 00:18:34.538 }, 00:18:34.538 "auth": { 00:18:34.538 "state": "completed", 00:18:34.538 "digest": "sha384", 00:18:34.538 "dhgroup": "ffdhe4096" 00:18:34.538 } 00:18:34.538 } 00:18:34.538 ]' 00:18:34.538 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.538 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.538 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.538 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:34.538 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.797 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.797 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.797 20:16:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.054 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZTIwNDkzY2E3N2ViNjdiYmQzODAzMDc2Mzk1ZjFmMzhjODMzMGQ1NjMzYWRhYTUy6e5vNA==: --dhchap-ctrl-secret DHHC-1:01:OWEyNDQ0NWZmZjYyN2FmYWJkNmExNDFjY2RhYmRhNDYXMKE8: 00:18:35.619 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.619 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:35.619 20:17:00 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.619 20:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.619 20:17:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.619 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.619 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:35.619 20:17:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:35.877 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:35.877 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.877 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:35.877 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:35.877 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:35.877 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.877 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:35.877 20:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.877 20:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.877 20:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.877 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.877 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.443 00:18:36.443 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.443 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.443 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.443 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.443 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.443 20:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.443 20:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.702 20:17:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.702 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.702 { 00:18:36.702 "cntlid": 79, 00:18:36.702 "qid": 
0, 00:18:36.702 "state": "enabled", 00:18:36.702 "thread": "nvmf_tgt_poll_group_000", 00:18:36.702 "listen_address": { 00:18:36.702 "trtype": "TCP", 00:18:36.702 "adrfam": "IPv4", 00:18:36.702 "traddr": "10.0.0.2", 00:18:36.702 "trsvcid": "4420" 00:18:36.702 }, 00:18:36.702 "peer_address": { 00:18:36.702 "trtype": "TCP", 00:18:36.702 "adrfam": "IPv4", 00:18:36.702 "traddr": "10.0.0.1", 00:18:36.702 "trsvcid": "46936" 00:18:36.702 }, 00:18:36.702 "auth": { 00:18:36.702 "state": "completed", 00:18:36.702 "digest": "sha384", 00:18:36.702 "dhgroup": "ffdhe4096" 00:18:36.702 } 00:18:36.702 } 00:18:36.702 ]' 00:18:36.702 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.702 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.702 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.702 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:36.702 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.702 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.702 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.702 20:17:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.960 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODE4MTBiNzQ4YWYzOTMwMjExYjVlYWEwMGQ3NjYyMDI5MzY2YzQyNGRiYmVjYjU4ZDE5MDAyMTM4NTEyZTE5MiTP0Ak=: 00:18:37.893 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.893 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:37.893 20:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.893 20:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.893 20:17:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.893 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.893 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.893 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:37.893 20:17:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:37.893 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:37.893 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.893 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:37.893 20:17:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:37.893 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:37.893 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.893 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.893 20:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.893 20:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.151 20:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.151 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.151 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.409 00:18:38.409 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.409 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.409 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.668 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.668 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.668 20:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.668 20:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.668 20:17:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.668 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.668 { 00:18:38.668 "cntlid": 81, 00:18:38.668 "qid": 0, 00:18:38.668 "state": "enabled", 00:18:38.668 "thread": "nvmf_tgt_poll_group_000", 00:18:38.668 "listen_address": { 00:18:38.668 "trtype": "TCP", 00:18:38.668 "adrfam": "IPv4", 00:18:38.668 "traddr": "10.0.0.2", 00:18:38.668 "trsvcid": "4420" 00:18:38.668 }, 00:18:38.668 "peer_address": { 00:18:38.668 "trtype": "TCP", 00:18:38.668 "adrfam": "IPv4", 00:18:38.668 "traddr": "10.0.0.1", 00:18:38.668 "trsvcid": "36220" 00:18:38.668 }, 00:18:38.668 "auth": { 00:18:38.668 "state": "completed", 00:18:38.668 "digest": "sha384", 00:18:38.668 "dhgroup": "ffdhe6144" 00:18:38.668 } 00:18:38.668 } 00:18:38.668 ]' 00:18:38.668 20:17:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.926 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.926 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.926 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:38.926 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.926 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.926 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.926 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.186 20:17:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OWM0OGM5NzRhYThjYmQxOTMyMWQ5MmFhZTM4NzdlYWM2ZDUwNzhhMDQ5YzEwMGI0NxiuJA==: --dhchap-ctrl-secret DHHC-1:03:NWVhOThiNTBjZDIxMTIwZGQ5NGQ2NWY3MzcyYmE4YWRhYjNjNTkwODY5N2ZmNjkwYjVkMzVmMGFjNzkzYWMyYjtQE7M=: 00:18:40.121 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.121 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:40.121 20:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.121 20:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.121 20:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.121 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.121 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:40.121 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:40.121 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:40.121 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.121 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:40.121 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:40.121 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:40.121 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.121 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.121 20:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.122 20:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.122 20:17:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.122 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.122 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.688 00:18:40.688 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.689 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.689 20:17:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.947 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.947 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.947 20:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.947 20:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.947 20:17:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.947 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.947 { 00:18:40.947 "cntlid": 83, 00:18:40.947 "qid": 0, 00:18:40.947 "state": "enabled", 00:18:40.947 "thread": "nvmf_tgt_poll_group_000", 00:18:40.947 "listen_address": { 00:18:40.947 "trtype": "TCP", 00:18:40.947 "adrfam": "IPv4", 00:18:40.947 "traddr": "10.0.0.2", 00:18:40.947 "trsvcid": "4420" 00:18:40.947 }, 00:18:40.947 "peer_address": { 00:18:40.947 "trtype": "TCP", 00:18:40.947 "adrfam": "IPv4", 00:18:40.947 "traddr": "10.0.0.1", 00:18:40.947 "trsvcid": "36242" 00:18:40.947 }, 00:18:40.947 "auth": { 00:18:40.947 "state": "completed", 00:18:40.947 "digest": "sha384", 00:18:40.947 "dhgroup": "ffdhe6144" 00:18:40.947 } 00:18:40.947 } 00:18:40.947 ]' 00:18:40.947 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.947 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.947 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.947 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:40.947 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.206 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.206 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.206 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.465 20:17:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzY4MGZmNTM4YzgxNDZhNmRmMjU0MDg0ZmI2NGIyNWFEugdb: --dhchap-ctrl-secret 
DHHC-1:02:ZGM5OTUwMDI5MjQ3MDgwNmZkMmM1ODU0YTRlMjY3ZDUzZTMxOGI3MDlkOTI0Mzhla+EIEg==: 00:18:42.031 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.290 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:42.290 20:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.290 20:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.290 20:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.290 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.290 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:42.290 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:42.290 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:42.290 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.290 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:42.290 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:42.290 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:42.548 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.548 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.548 20:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.548 20:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.548 20:17:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.548 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.548 20:17:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.805 00:18:42.805 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.805 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.805 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.064 20:17:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.064 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.064 20:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.064 20:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.064 20:17:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.064 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.064 { 00:18:43.064 "cntlid": 85, 00:18:43.064 "qid": 0, 00:18:43.064 "state": "enabled", 00:18:43.064 "thread": "nvmf_tgt_poll_group_000", 00:18:43.064 "listen_address": { 00:18:43.064 "trtype": "TCP", 00:18:43.064 "adrfam": "IPv4", 00:18:43.064 "traddr": "10.0.0.2", 00:18:43.064 "trsvcid": "4420" 00:18:43.064 }, 00:18:43.064 "peer_address": { 00:18:43.064 "trtype": "TCP", 00:18:43.064 "adrfam": "IPv4", 00:18:43.064 "traddr": "10.0.0.1", 00:18:43.064 "trsvcid": "36274" 00:18:43.064 }, 00:18:43.064 "auth": { 00:18:43.064 "state": "completed", 00:18:43.064 "digest": "sha384", 00:18:43.064 "dhgroup": "ffdhe6144" 00:18:43.064 } 00:18:43.064 } 00:18:43.064 ]' 00:18:43.064 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.322 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.322 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.322 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:43.322 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.322 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.322 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.322 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.579 20:17:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZTIwNDkzY2E3N2ViNjdiYmQzODAzMDc2Mzk1ZjFmMzhjODMzMGQ1NjMzYWRhYTUy6e5vNA==: --dhchap-ctrl-secret DHHC-1:01:OWEyNDQ0NWZmZjYyN2FmYWJkNmExNDFjY2RhYmRhNDYXMKE8: 00:18:44.514 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.514 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:44.514 20:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.514 20:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.514 20:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.514 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.514 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:18:44.514 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:44.514 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:44.514 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.514 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:44.514 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:44.514 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:44.514 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.514 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:44.514 20:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.514 20:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.773 20:17:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.773 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:44.773 20:17:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.031 00:18:45.031 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.031 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.031 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.290 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.290 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.290 20:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.290 20:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.290 20:17:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.290 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.290 { 00:18:45.290 "cntlid": 87, 00:18:45.290 "qid": 0, 00:18:45.290 "state": "enabled", 00:18:45.290 "thread": "nvmf_tgt_poll_group_000", 00:18:45.290 "listen_address": { 00:18:45.290 "trtype": "TCP", 00:18:45.290 "adrfam": "IPv4", 00:18:45.290 "traddr": "10.0.0.2", 00:18:45.290 "trsvcid": "4420" 00:18:45.290 }, 00:18:45.290 "peer_address": { 00:18:45.290 "trtype": "TCP", 00:18:45.290 "adrfam": "IPv4", 00:18:45.290 "traddr": "10.0.0.1", 00:18:45.290 "trsvcid": "36316" 00:18:45.290 }, 00:18:45.290 "auth": { 00:18:45.290 "state": "completed", 
00:18:45.290 "digest": "sha384", 00:18:45.290 "dhgroup": "ffdhe6144" 00:18:45.290 } 00:18:45.290 } 00:18:45.290 ]' 00:18:45.290 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.549 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.549 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.549 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:45.549 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.549 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.549 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.549 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.808 20:17:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODE4MTBiNzQ4YWYzOTMwMjExYjVlYWEwMGQ3NjYyMDI5MzY2YzQyNGRiYmVjYjU4ZDE5MDAyMTM4NTEyZTE5MiTP0Ak=: 00:18:46.743 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.743 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:46.743 20:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.743 20:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.743 20:17:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.743 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.743 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.743 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:46.744 20:17:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:46.744 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:46.744 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.744 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:46.744 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:46.744 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:46.744 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.744 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:46.744 20:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.744 20:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.744 20:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.744 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.744 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.680 00:18:47.680 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.680 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.680 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.680 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.680 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.680 20:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.680 20:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.680 20:17:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.680 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.680 { 00:18:47.680 "cntlid": 89, 00:18:47.680 "qid": 0, 00:18:47.680 "state": "enabled", 00:18:47.680 "thread": "nvmf_tgt_poll_group_000", 00:18:47.680 "listen_address": { 00:18:47.680 "trtype": "TCP", 00:18:47.680 "adrfam": "IPv4", 00:18:47.680 "traddr": "10.0.0.2", 00:18:47.680 "trsvcid": "4420" 00:18:47.680 }, 00:18:47.680 "peer_address": { 00:18:47.680 "trtype": "TCP", 00:18:47.680 "adrfam": "IPv4", 00:18:47.680 "traddr": "10.0.0.1", 00:18:47.680 "trsvcid": "36488" 00:18:47.680 }, 00:18:47.680 "auth": { 00:18:47.680 "state": "completed", 00:18:47.680 "digest": "sha384", 00:18:47.680 "dhgroup": "ffdhe8192" 00:18:47.680 } 00:18:47.680 } 00:18:47.680 ]' 00:18:47.680 20:17:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.680 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.680 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.939 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:47.939 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.939 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.939 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.939 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.200 20:17:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OWM0OGM5NzRhYThjYmQxOTMyMWQ5MmFhZTM4NzdlYWM2ZDUwNzhhMDQ5YzEwMGI0NxiuJA==: --dhchap-ctrl-secret DHHC-1:03:NWVhOThiNTBjZDIxMTIwZGQ5NGQ2NWY3MzcyYmE4YWRhYjNjNTkwODY5N2ZmNjkwYjVkMzVmMGFjNzkzYWMyYjtQE7M=: 00:18:49.137 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.137 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:49.137 20:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.137 20:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.137 20:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.137 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.137 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:49.137 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:49.137 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:49.137 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.137 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.137 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:49.137 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:49.137 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.137 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.137 20:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.137 20:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.137 20:17:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.137 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.137 20:17:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:18:50.075 00:18:50.075 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.075 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.075 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.075 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.075 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.075 20:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.075 20:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.075 20:17:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.075 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.075 { 00:18:50.075 "cntlid": 91, 00:18:50.075 "qid": 0, 00:18:50.075 "state": "enabled", 00:18:50.075 "thread": "nvmf_tgt_poll_group_000", 00:18:50.075 "listen_address": { 00:18:50.075 "trtype": "TCP", 00:18:50.075 "adrfam": "IPv4", 00:18:50.075 "traddr": "10.0.0.2", 00:18:50.075 "trsvcid": "4420" 00:18:50.075 }, 00:18:50.075 "peer_address": { 00:18:50.075 "trtype": "TCP", 00:18:50.075 "adrfam": "IPv4", 00:18:50.075 "traddr": "10.0.0.1", 00:18:50.075 "trsvcid": "36518" 00:18:50.075 }, 00:18:50.075 "auth": { 00:18:50.075 "state": "completed", 00:18:50.075 "digest": "sha384", 00:18:50.075 "dhgroup": "ffdhe8192" 00:18:50.075 } 00:18:50.075 } 00:18:50.075 ]' 00:18:50.075 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.075 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.075 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.334 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:50.334 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.334 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.334 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.334 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.593 20:17:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzY4MGZmNTM4YzgxNDZhNmRmMjU0MDg0ZmI2NGIyNWFEugdb: --dhchap-ctrl-secret DHHC-1:02:ZGM5OTUwMDI5MjQ3MDgwNmZkMmM1ODU0YTRlMjY3ZDUzZTMxOGI3MDlkOTI0Mzhla+EIEg==: 00:18:51.530 20:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.530 20:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:51.530 20:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:51.530 20:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.530 20:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.530 20:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.530 20:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:51.530 20:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:51.530 20:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:51.530 20:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.530 20:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:51.530 20:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:51.530 20:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:51.530 20:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.530 20:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.530 20:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.530 20:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.530 20:17:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.530 20:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.530 20:17:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.468 00:18:52.468 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.468 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.468 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.468 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.468 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.468 20:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.468 20:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.468 20:17:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.468 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.468 { 
00:18:52.468 "cntlid": 93, 00:18:52.468 "qid": 0, 00:18:52.468 "state": "enabled", 00:18:52.468 "thread": "nvmf_tgt_poll_group_000", 00:18:52.468 "listen_address": { 00:18:52.468 "trtype": "TCP", 00:18:52.468 "adrfam": "IPv4", 00:18:52.468 "traddr": "10.0.0.2", 00:18:52.468 "trsvcid": "4420" 00:18:52.468 }, 00:18:52.468 "peer_address": { 00:18:52.468 "trtype": "TCP", 00:18:52.468 "adrfam": "IPv4", 00:18:52.468 "traddr": "10.0.0.1", 00:18:52.468 "trsvcid": "36548" 00:18:52.468 }, 00:18:52.468 "auth": { 00:18:52.468 "state": "completed", 00:18:52.468 "digest": "sha384", 00:18:52.468 "dhgroup": "ffdhe8192" 00:18:52.468 } 00:18:52.468 } 00:18:52.468 ]' 00:18:52.468 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.727 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:52.727 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.727 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:52.727 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.727 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.727 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.727 20:17:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.986 20:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZTIwNDkzY2E3N2ViNjdiYmQzODAzMDc2Mzk1ZjFmMzhjODMzMGQ1NjMzYWRhYTUy6e5vNA==: --dhchap-ctrl-secret DHHC-1:01:OWEyNDQ0NWZmZjYyN2FmYWJkNmExNDFjY2RhYmRhNDYXMKE8: 00:18:53.921 20:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.921 20:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:53.921 20:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.921 20:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.921 20:17:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.921 20:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.921 20:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:53.921 20:17:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:53.921 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:53.921 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.921 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:53.921 20:17:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:53.921 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:53.921 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.921 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:53.921 20:17:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.921 20:17:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.921 20:17:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.921 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.921 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.857 00:18:54.857 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.857 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.857 20:17:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.857 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.857 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.857 20:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.857 20:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.115 20:17:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.115 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.115 { 00:18:55.115 "cntlid": 95, 00:18:55.115 "qid": 0, 00:18:55.115 "state": "enabled", 00:18:55.115 "thread": "nvmf_tgt_poll_group_000", 00:18:55.115 "listen_address": { 00:18:55.115 "trtype": "TCP", 00:18:55.115 "adrfam": "IPv4", 00:18:55.115 "traddr": "10.0.0.2", 00:18:55.115 "trsvcid": "4420" 00:18:55.115 }, 00:18:55.115 "peer_address": { 00:18:55.115 "trtype": "TCP", 00:18:55.115 "adrfam": "IPv4", 00:18:55.115 "traddr": "10.0.0.1", 00:18:55.115 "trsvcid": "36568" 00:18:55.115 }, 00:18:55.115 "auth": { 00:18:55.115 "state": "completed", 00:18:55.115 "digest": "sha384", 00:18:55.115 "dhgroup": "ffdhe8192" 00:18:55.115 } 00:18:55.115 } 00:18:55.115 ]' 00:18:55.115 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.115 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.115 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.115 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:55.115 20:17:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.115 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.115 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.115 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.375 20:17:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODE4MTBiNzQ4YWYzOTMwMjExYjVlYWEwMGQ3NjYyMDI5MzY2YzQyNGRiYmVjYjU4ZDE5MDAyMTM4NTEyZTE5MiTP0Ak=: 00:18:56.310 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.310 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:56.310 20:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.310 20:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.310 20:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.310 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:56.310 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.310 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.310 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:56.310 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:56.569 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:56.569 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.569 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.569 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:56.569 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:56.569 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.569 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.569 20:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.569 20:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.569 20:17:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.569 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.569 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.828 00:18:56.828 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.828 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.828 20:17:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.087 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.087 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.087 20:17:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.087 20:17:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.087 20:17:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.087 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.087 { 00:18:57.087 "cntlid": 97, 00:18:57.087 "qid": 0, 00:18:57.087 "state": "enabled", 00:18:57.087 "thread": "nvmf_tgt_poll_group_000", 00:18:57.087 "listen_address": { 00:18:57.087 "trtype": "TCP", 00:18:57.087 "adrfam": "IPv4", 00:18:57.087 "traddr": "10.0.0.2", 00:18:57.087 "trsvcid": "4420" 00:18:57.087 }, 00:18:57.087 "peer_address": { 00:18:57.087 "trtype": "TCP", 00:18:57.087 "adrfam": "IPv4", 00:18:57.087 "traddr": "10.0.0.1", 00:18:57.087 "trsvcid": "58668" 00:18:57.087 }, 00:18:57.087 "auth": { 00:18:57.087 "state": "completed", 00:18:57.087 "digest": "sha512", 00:18:57.087 "dhgroup": "null" 00:18:57.087 } 00:18:57.087 } 00:18:57.087 ]' 00:18:57.087 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.087 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.087 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.087 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:57.087 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.087 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.087 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.087 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.346 20:17:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OWM0OGM5NzRhYThjYmQxOTMyMWQ5MmFhZTM4NzdlYWM2ZDUwNzhhMDQ5YzEwMGI0NxiuJA==: --dhchap-ctrl-secret 
DHHC-1:03:NWVhOThiNTBjZDIxMTIwZGQ5NGQ2NWY3MzcyYmE4YWRhYjNjNTkwODY5N2ZmNjkwYjVkMzVmMGFjNzkzYWMyYjtQE7M=: 00:18:58.282 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.282 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:58.282 20:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.282 20:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.282 20:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.282 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.282 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:58.282 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:58.541 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:58.541 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.541 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:58.541 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:58.541 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:58.541 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.541 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.541 20:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.541 20:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.541 20:17:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.541 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.541 20:17:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.800 00:18:58.800 20:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.800 20:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.800 20:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.059 20:17:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.059 20:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.059 20:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.059 20:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.059 20:17:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.059 20:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.059 { 00:18:59.059 "cntlid": 99, 00:18:59.059 "qid": 0, 00:18:59.059 "state": "enabled", 00:18:59.059 "thread": "nvmf_tgt_poll_group_000", 00:18:59.059 "listen_address": { 00:18:59.059 "trtype": "TCP", 00:18:59.059 "adrfam": "IPv4", 00:18:59.059 "traddr": "10.0.0.2", 00:18:59.059 "trsvcid": "4420" 00:18:59.059 }, 00:18:59.059 "peer_address": { 00:18:59.059 "trtype": "TCP", 00:18:59.059 "adrfam": "IPv4", 00:18:59.059 "traddr": "10.0.0.1", 00:18:59.059 "trsvcid": "58700" 00:18:59.059 }, 00:18:59.059 "auth": { 00:18:59.059 "state": "completed", 00:18:59.059 "digest": "sha512", 00:18:59.059 "dhgroup": "null" 00:18:59.059 } 00:18:59.059 } 00:18:59.059 ]' 00:18:59.059 20:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.059 20:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.059 20:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.318 20:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:59.318 20:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.318 20:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.318 20:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.318 20:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.577 20:17:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzY4MGZmNTM4YzgxNDZhNmRmMjU0MDg0ZmI2NGIyNWFEugdb: --dhchap-ctrl-secret DHHC-1:02:ZGM5OTUwMDI5MjQ3MDgwNmZkMmM1ODU0YTRlMjY3ZDUzZTMxOGI3MDlkOTI0Mzhla+EIEg==: 00:19:00.512 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.512 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:00.512 20:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.513 20:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.513 20:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.513 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.513 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:00.513 20:17:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:00.513 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:00.513 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.513 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:00.513 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:00.513 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:00.513 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.513 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.513 20:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.513 20:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.513 20:17:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.513 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.513 20:17:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.771 00:19:00.771 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.771 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.771 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.030 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.030 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.030 20:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.030 20:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.030 20:17:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.030 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.030 { 00:19:01.030 "cntlid": 101, 00:19:01.030 "qid": 0, 00:19:01.030 "state": "enabled", 00:19:01.030 "thread": "nvmf_tgt_poll_group_000", 00:19:01.030 "listen_address": { 00:19:01.030 "trtype": "TCP", 00:19:01.030 "adrfam": "IPv4", 00:19:01.030 "traddr": "10.0.0.2", 00:19:01.030 "trsvcid": "4420" 00:19:01.030 }, 00:19:01.030 "peer_address": { 00:19:01.030 "trtype": "TCP", 00:19:01.030 "adrfam": "IPv4", 00:19:01.030 "traddr": "10.0.0.1", 00:19:01.030 "trsvcid": "58716" 00:19:01.030 }, 00:19:01.030 "auth": 
{ 00:19:01.030 "state": "completed", 00:19:01.030 "digest": "sha512", 00:19:01.030 "dhgroup": "null" 00:19:01.030 } 00:19:01.030 } 00:19:01.030 ]' 00:19:01.289 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.289 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.289 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.289 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:01.289 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.289 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.289 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.289 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.548 20:17:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZTIwNDkzY2E3N2ViNjdiYmQzODAzMDc2Mzk1ZjFmMzhjODMzMGQ1NjMzYWRhYTUy6e5vNA==: --dhchap-ctrl-secret DHHC-1:01:OWEyNDQ0NWZmZjYyN2FmYWJkNmExNDFjY2RhYmRhNDYXMKE8: 00:19:02.180 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.180 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:02.180 20:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.180 20:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.450 20:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.450 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.450 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:02.450 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:02.450 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:02.450 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.450 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:02.450 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:02.450 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:02.450 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.450 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:02.450 20:17:27 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.450 20:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.450 20:17:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.450 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.450 20:17:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.708 00:19:02.966 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.966 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.966 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.224 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.224 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.224 20:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.224 20:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.224 20:17:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.224 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.224 { 00:19:03.224 "cntlid": 103, 00:19:03.224 "qid": 0, 00:19:03.224 "state": "enabled", 00:19:03.224 "thread": "nvmf_tgt_poll_group_000", 00:19:03.224 "listen_address": { 00:19:03.224 "trtype": "TCP", 00:19:03.224 "adrfam": "IPv4", 00:19:03.224 "traddr": "10.0.0.2", 00:19:03.224 "trsvcid": "4420" 00:19:03.224 }, 00:19:03.224 "peer_address": { 00:19:03.224 "trtype": "TCP", 00:19:03.224 "adrfam": "IPv4", 00:19:03.224 "traddr": "10.0.0.1", 00:19:03.224 "trsvcid": "58744" 00:19:03.224 }, 00:19:03.224 "auth": { 00:19:03.224 "state": "completed", 00:19:03.224 "digest": "sha512", 00:19:03.224 "dhgroup": "null" 00:19:03.224 } 00:19:03.224 } 00:19:03.224 ]' 00:19:03.224 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.224 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.224 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.224 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:03.224 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.224 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.224 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.224 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.483 20:17:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODE4MTBiNzQ4YWYzOTMwMjExYjVlYWEwMGQ3NjYyMDI5MzY2YzQyNGRiYmVjYjU4ZDE5MDAyMTM4NTEyZTE5MiTP0Ak=: 00:19:04.420 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.420 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:04.420 20:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.420 20:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.420 20:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.420 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:04.420 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.420 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:04.420 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:04.680 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:04.680 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.680 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.680 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:04.680 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:04.680 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.680 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.680 20:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.680 20:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.680 20:17:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.680 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.680 20:17:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.247 00:19:05.247 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.247 20:17:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.247 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.507 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.507 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.507 20:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.507 20:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.507 20:17:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.507 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.507 { 00:19:05.507 "cntlid": 105, 00:19:05.507 "qid": 0, 00:19:05.507 "state": "enabled", 00:19:05.507 "thread": "nvmf_tgt_poll_group_000", 00:19:05.507 "listen_address": { 00:19:05.507 "trtype": "TCP", 00:19:05.507 "adrfam": "IPv4", 00:19:05.507 "traddr": "10.0.0.2", 00:19:05.507 "trsvcid": "4420" 00:19:05.507 }, 00:19:05.507 "peer_address": { 00:19:05.507 "trtype": "TCP", 00:19:05.507 "adrfam": "IPv4", 00:19:05.507 "traddr": "10.0.0.1", 00:19:05.507 "trsvcid": "58784" 00:19:05.507 }, 00:19:05.507 "auth": { 00:19:05.507 "state": "completed", 00:19:05.507 "digest": "sha512", 00:19:05.507 "dhgroup": "ffdhe2048" 00:19:05.507 } 00:19:05.507 } 00:19:05.507 ]' 00:19:05.507 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.507 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.507 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.766 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:05.766 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.766 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.766 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.766 20:17:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.025 20:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OWM0OGM5NzRhYThjYmQxOTMyMWQ5MmFhZTM4NzdlYWM2ZDUwNzhhMDQ5YzEwMGI0NxiuJA==: --dhchap-ctrl-secret DHHC-1:03:NWVhOThiNTBjZDIxMTIwZGQ5NGQ2NWY3MzcyYmE4YWRhYjNjNTkwODY5N2ZmNjkwYjVkMzVmMGFjNzkzYWMyYjtQE7M=: 00:19:06.592 20:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.592 20:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:06.592 20:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.592 20:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
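[Editor's note] The trace above and below repeats the same DH-HMAC-CHAP verification loop for every digest/dhgroup/key combination. As a condensed sketch of one iteration (the sha512/ffdhe2048 pass with key1 that starts immediately below), reconstructed only from commands that appear verbatim in this log: `hostrpc` is the test's wrapper around scripts/rpc.py -s /var/tmp/host.sock, `rpc_cmd` is its target-side RPC helper, and `<secret>`/`<ctrl-secret>` stand in for the DHHC-1 values the test generated (placeholders, not literal keys):

  # restrict the host stack to the digest/dhgroup under test
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # register the host NQN on the target subsystem with this iteration's key pair
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # attach through the SPDK host stack, then check the qpair reports auth state "completed"
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0   # expect auth.state == "completed"
  hostrpc bdev_nvme_detach_controller nvme0
  # repeat the handshake with the kernel initiator, then tear down for the next iteration
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
      --hostid 00abaa28-3537-eb11-906e-0017a4403562 \
      --dhchap-secret <secret> --dhchap-ctrl-secret <ctrl-secret>
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562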
00:19:06.592 20:17:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.592 20:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.592 20:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:06.592 20:17:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:06.851 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:06.851 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.851 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:06.851 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:06.851 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:06.851 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.851 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.851 20:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.851 20:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.851 20:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.852 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.852 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.421 00:19:07.421 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.421 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.421 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.421 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.421 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.421 20:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.421 20:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.421 20:17:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.421 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.421 { 00:19:07.421 "cntlid": 107, 00:19:07.421 "qid": 0, 00:19:07.421 "state": "enabled", 00:19:07.421 "thread": 
"nvmf_tgt_poll_group_000", 00:19:07.421 "listen_address": { 00:19:07.421 "trtype": "TCP", 00:19:07.421 "adrfam": "IPv4", 00:19:07.421 "traddr": "10.0.0.2", 00:19:07.421 "trsvcid": "4420" 00:19:07.421 }, 00:19:07.421 "peer_address": { 00:19:07.421 "trtype": "TCP", 00:19:07.421 "adrfam": "IPv4", 00:19:07.421 "traddr": "10.0.0.1", 00:19:07.421 "trsvcid": "59802" 00:19:07.421 }, 00:19:07.421 "auth": { 00:19:07.421 "state": "completed", 00:19:07.421 "digest": "sha512", 00:19:07.421 "dhgroup": "ffdhe2048" 00:19:07.421 } 00:19:07.421 } 00:19:07.421 ]' 00:19:07.421 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.680 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.680 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.680 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:07.680 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.680 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.680 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.680 20:17:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.939 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzY4MGZmNTM4YzgxNDZhNmRmMjU0MDg0ZmI2NGIyNWFEugdb: --dhchap-ctrl-secret DHHC-1:02:ZGM5OTUwMDI5MjQ3MDgwNmZkMmM1ODU0YTRlMjY3ZDUzZTMxOGI3MDlkOTI0Mzhla+EIEg==: 00:19:08.505 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.505 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:08.505 20:17:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.505 20:17:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.505 20:17:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.505 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.505 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:08.505 20:17:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:08.764 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:08.764 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.764 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:08.764 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:08.764 20:17:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:08.764 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.764 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.764 20:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.764 20:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.764 20:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.764 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.764 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.329 00:19:09.329 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.329 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.329 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.329 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.329 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.329 20:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.329 20:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.329 20:17:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.329 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.329 { 00:19:09.329 "cntlid": 109, 00:19:09.329 "qid": 0, 00:19:09.329 "state": "enabled", 00:19:09.329 "thread": "nvmf_tgt_poll_group_000", 00:19:09.329 "listen_address": { 00:19:09.329 "trtype": "TCP", 00:19:09.329 "adrfam": "IPv4", 00:19:09.329 "traddr": "10.0.0.2", 00:19:09.329 "trsvcid": "4420" 00:19:09.329 }, 00:19:09.329 "peer_address": { 00:19:09.329 "trtype": "TCP", 00:19:09.329 "adrfam": "IPv4", 00:19:09.329 "traddr": "10.0.0.1", 00:19:09.329 "trsvcid": "59826" 00:19:09.329 }, 00:19:09.329 "auth": { 00:19:09.329 "state": "completed", 00:19:09.329 "digest": "sha512", 00:19:09.329 "dhgroup": "ffdhe2048" 00:19:09.329 } 00:19:09.329 } 00:19:09.329 ]' 00:19:09.329 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.589 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.589 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.589 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:09.589 20:17:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.589 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.589 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.589 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.848 20:17:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZTIwNDkzY2E3N2ViNjdiYmQzODAzMDc2Mzk1ZjFmMzhjODMzMGQ1NjMzYWRhYTUy6e5vNA==: --dhchap-ctrl-secret DHHC-1:01:OWEyNDQ0NWZmZjYyN2FmYWJkNmExNDFjY2RhYmRhNDYXMKE8: 00:19:10.783 20:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.783 20:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:10.783 20:17:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.783 20:17:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.783 20:17:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.783 20:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.783 20:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:10.783 20:17:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:11.041 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:11.041 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.041 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:11.041 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:11.041 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:11.041 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.041 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:11.041 20:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.041 20:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.041 20:17:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.041 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.041 20:17:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.605 00:19:11.605 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.605 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.605 20:17:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.864 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.864 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.864 20:17:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.864 20:17:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.864 20:17:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.864 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.864 { 00:19:11.864 "cntlid": 111, 00:19:11.864 "qid": 0, 00:19:11.864 "state": "enabled", 00:19:11.864 "thread": "nvmf_tgt_poll_group_000", 00:19:11.864 "listen_address": { 00:19:11.864 "trtype": "TCP", 00:19:11.864 "adrfam": "IPv4", 00:19:11.864 "traddr": "10.0.0.2", 00:19:11.864 "trsvcid": "4420" 00:19:11.864 }, 00:19:11.864 "peer_address": { 00:19:11.864 "trtype": "TCP", 00:19:11.864 "adrfam": "IPv4", 00:19:11.864 "traddr": "10.0.0.1", 00:19:11.864 "trsvcid": "59858" 00:19:11.864 }, 00:19:11.864 "auth": { 00:19:11.864 "state": "completed", 00:19:11.864 "digest": "sha512", 00:19:11.864 "dhgroup": "ffdhe2048" 00:19:11.864 } 00:19:11.864 } 00:19:11.864 ]' 00:19:11.864 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.864 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.864 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.122 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:12.122 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.122 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.122 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.122 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.380 20:17:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODE4MTBiNzQ4YWYzOTMwMjExYjVlYWEwMGQ3NjYyMDI5MzY2YzQyNGRiYmVjYjU4ZDE5MDAyMTM4NTEyZTE5MiTP0Ak=: 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.316 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.575 00:19:13.833 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.833 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.833 20:17:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.833 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.833 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.833 20:17:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.833 20:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.833 20:17:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.833 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.833 { 00:19:13.833 "cntlid": 113, 00:19:13.833 "qid": 0, 00:19:13.833 "state": "enabled", 00:19:13.833 "thread": "nvmf_tgt_poll_group_000", 00:19:13.834 "listen_address": { 00:19:13.834 "trtype": "TCP", 00:19:13.834 "adrfam": "IPv4", 00:19:13.834 "traddr": "10.0.0.2", 00:19:13.834 "trsvcid": "4420" 00:19:13.834 }, 00:19:13.834 "peer_address": { 00:19:13.834 "trtype": "TCP", 00:19:13.834 "adrfam": "IPv4", 00:19:13.834 "traddr": "10.0.0.1", 00:19:13.834 "trsvcid": "59900" 00:19:13.834 }, 00:19:13.834 "auth": { 00:19:13.834 "state": "completed", 00:19:13.834 "digest": "sha512", 00:19:13.834 "dhgroup": "ffdhe3072" 00:19:13.834 } 00:19:13.834 } 00:19:13.834 ]' 00:19:13.834 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.834 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.834 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.092 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:14.092 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.092 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.092 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.092 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.350 20:17:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OWM0OGM5NzRhYThjYmQxOTMyMWQ5MmFhZTM4NzdlYWM2ZDUwNzhhMDQ5YzEwMGI0NxiuJA==: --dhchap-ctrl-secret DHHC-1:03:NWVhOThiNTBjZDIxMTIwZGQ5NGQ2NWY3MzcyYmE4YWRhYjNjNTkwODY5N2ZmNjkwYjVkMzVmMGFjNzkzYWMyYjtQE7M=: 00:19:15.285 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.285 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:15.285 20:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.285 20:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.285 20:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.285 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.285 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:15.285 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:15.543 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:15.543 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.543 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:15.543 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:15.543 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:15.543 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.543 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.543 20:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.543 20:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.543 20:17:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.543 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.543 20:17:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.108 00:19:16.108 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.108 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.108 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.366 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.366 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.366 20:17:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.366 20:17:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.366 20:17:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.366 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.366 { 00:19:16.366 "cntlid": 115, 00:19:16.366 "qid": 0, 00:19:16.366 "state": "enabled", 00:19:16.366 "thread": "nvmf_tgt_poll_group_000", 00:19:16.366 "listen_address": { 00:19:16.366 "trtype": "TCP", 00:19:16.366 "adrfam": "IPv4", 00:19:16.366 "traddr": "10.0.0.2", 00:19:16.366 "trsvcid": "4420" 00:19:16.366 }, 00:19:16.366 "peer_address": { 00:19:16.366 "trtype": "TCP", 00:19:16.366 "adrfam": "IPv4", 00:19:16.366 "traddr": "10.0.0.1", 00:19:16.366 "trsvcid": "59922" 00:19:16.366 }, 00:19:16.366 "auth": { 00:19:16.366 "state": "completed", 00:19:16.366 "digest": "sha512", 00:19:16.366 "dhgroup": "ffdhe3072" 00:19:16.366 } 00:19:16.366 } 
00:19:16.366 ]' 00:19:16.366 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.366 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.366 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.623 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:16.623 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.623 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.623 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.623 20:17:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.880 20:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzY4MGZmNTM4YzgxNDZhNmRmMjU0MDg0ZmI2NGIyNWFEugdb: --dhchap-ctrl-secret DHHC-1:02:ZGM5OTUwMDI5MjQ3MDgwNmZkMmM1ODU0YTRlMjY3ZDUzZTMxOGI3MDlkOTI0Mzhla+EIEg==: 00:19:17.812 20:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.812 20:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:17.812 20:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.812 20:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.812 20:17:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.812 20:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.812 20:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:17.812 20:17:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:17.812 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:17.812 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.812 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:17.812 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:17.812 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:17.812 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.812 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.813 20:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.813 20:17:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.813 20:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.813 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.813 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.377 00:19:18.377 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.377 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.377 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.377 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.378 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.378 20:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.378 20:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.378 20:17:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.378 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.378 { 00:19:18.378 "cntlid": 117, 00:19:18.378 "qid": 0, 00:19:18.378 "state": "enabled", 00:19:18.378 "thread": "nvmf_tgt_poll_group_000", 00:19:18.378 "listen_address": { 00:19:18.378 "trtype": "TCP", 00:19:18.378 "adrfam": "IPv4", 00:19:18.378 "traddr": "10.0.0.2", 00:19:18.378 "trsvcid": "4420" 00:19:18.378 }, 00:19:18.378 "peer_address": { 00:19:18.378 "trtype": "TCP", 00:19:18.378 "adrfam": "IPv4", 00:19:18.378 "traddr": "10.0.0.1", 00:19:18.378 "trsvcid": "34798" 00:19:18.378 }, 00:19:18.378 "auth": { 00:19:18.378 "state": "completed", 00:19:18.378 "digest": "sha512", 00:19:18.378 "dhgroup": "ffdhe3072" 00:19:18.378 } 00:19:18.378 } 00:19:18.378 ]' 00:19:18.378 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.657 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.657 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.657 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:18.657 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.657 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.657 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.657 20:17:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.914 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZTIwNDkzY2E3N2ViNjdiYmQzODAzMDc2Mzk1ZjFmMzhjODMzMGQ1NjMzYWRhYTUy6e5vNA==: --dhchap-ctrl-secret DHHC-1:01:OWEyNDQ0NWZmZjYyN2FmYWJkNmExNDFjY2RhYmRhNDYXMKE8: 00:19:19.852 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.852 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:19.852 20:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.852 20:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.852 20:17:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.852 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.852 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:19.852 20:17:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:19.852 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:19.852 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.852 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:19.852 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:19.852 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:19.852 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.852 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:19.852 20:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.852 20:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.852 20:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.852 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.852 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.111 00:19:20.111 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.111 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.111 20:17:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.370 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.370 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.370 20:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.370 20:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.370 20:17:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.370 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.370 { 00:19:20.370 "cntlid": 119, 00:19:20.370 "qid": 0, 00:19:20.370 "state": "enabled", 00:19:20.370 "thread": "nvmf_tgt_poll_group_000", 00:19:20.370 "listen_address": { 00:19:20.370 "trtype": "TCP", 00:19:20.370 "adrfam": "IPv4", 00:19:20.370 "traddr": "10.0.0.2", 00:19:20.370 "trsvcid": "4420" 00:19:20.370 }, 00:19:20.370 "peer_address": { 00:19:20.370 "trtype": "TCP", 00:19:20.370 "adrfam": "IPv4", 00:19:20.370 "traddr": "10.0.0.1", 00:19:20.370 "trsvcid": "34828" 00:19:20.370 }, 00:19:20.370 "auth": { 00:19:20.370 "state": "completed", 00:19:20.370 "digest": "sha512", 00:19:20.370 "dhgroup": "ffdhe3072" 00:19:20.370 } 00:19:20.370 } 00:19:20.370 ]' 00:19:20.370 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.629 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.629 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.629 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:20.629 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.630 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.630 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.630 20:17:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.888 20:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODE4MTBiNzQ4YWYzOTMwMjExYjVlYWEwMGQ3NjYyMDI5MzY2YzQyNGRiYmVjYjU4ZDE5MDAyMTM4NTEyZTE5MiTP0Ak=: 00:19:21.825 20:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.825 20:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:21.825 20:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.826 20:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.826 20:17:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.826 20:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.826 20:17:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.826 20:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:21.826 20:17:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:22.085 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:22.085 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.085 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:22.085 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:22.085 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:22.085 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.085 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.085 20:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.085 20:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.085 20:17:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.085 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.085 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.653 00:19:22.653 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.653 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.653 20:17:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.912 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.912 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.912 20:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.912 20:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.912 20:17:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.912 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.912 { 00:19:22.912 "cntlid": 121, 00:19:22.912 "qid": 0, 00:19:22.912 "state": "enabled", 00:19:22.912 "thread": "nvmf_tgt_poll_group_000", 00:19:22.912 "listen_address": { 00:19:22.912 "trtype": "TCP", 00:19:22.912 "adrfam": "IPv4", 
00:19:22.912 "traddr": "10.0.0.2", 00:19:22.912 "trsvcid": "4420" 00:19:22.912 }, 00:19:22.912 "peer_address": { 00:19:22.912 "trtype": "TCP", 00:19:22.912 "adrfam": "IPv4", 00:19:22.912 "traddr": "10.0.0.1", 00:19:22.912 "trsvcid": "34850" 00:19:22.912 }, 00:19:22.912 "auth": { 00:19:22.912 "state": "completed", 00:19:22.912 "digest": "sha512", 00:19:22.912 "dhgroup": "ffdhe4096" 00:19:22.912 } 00:19:22.912 } 00:19:22.912 ]' 00:19:22.912 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.912 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.912 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.912 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:22.912 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.912 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.912 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.912 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.171 20:17:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OWM0OGM5NzRhYThjYmQxOTMyMWQ5MmFhZTM4NzdlYWM2ZDUwNzhhMDQ5YzEwMGI0NxiuJA==: --dhchap-ctrl-secret DHHC-1:03:NWVhOThiNTBjZDIxMTIwZGQ5NGQ2NWY3MzcyYmE4YWRhYjNjNTkwODY5N2ZmNjkwYjVkMzVmMGFjNzkzYWMyYjtQE7M=: 00:19:24.109 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.109 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:24.109 20:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.109 20:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.109 20:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.109 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.109 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:24.109 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:24.678 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:24.678 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.678 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:24.678 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:24.678 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:24.678 20:17:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.678 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.678 20:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.678 20:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.678 20:17:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.678 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.678 20:17:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.246 00:19:25.246 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.246 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.246 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.506 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.506 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.506 20:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.506 20:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.506 20:17:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.506 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.506 { 00:19:25.506 "cntlid": 123, 00:19:25.506 "qid": 0, 00:19:25.506 "state": "enabled", 00:19:25.506 "thread": "nvmf_tgt_poll_group_000", 00:19:25.506 "listen_address": { 00:19:25.506 "trtype": "TCP", 00:19:25.506 "adrfam": "IPv4", 00:19:25.506 "traddr": "10.0.0.2", 00:19:25.506 "trsvcid": "4420" 00:19:25.506 }, 00:19:25.506 "peer_address": { 00:19:25.506 "trtype": "TCP", 00:19:25.506 "adrfam": "IPv4", 00:19:25.506 "traddr": "10.0.0.1", 00:19:25.506 "trsvcid": "34876" 00:19:25.506 }, 00:19:25.506 "auth": { 00:19:25.506 "state": "completed", 00:19:25.506 "digest": "sha512", 00:19:25.506 "dhgroup": "ffdhe4096" 00:19:25.506 } 00:19:25.506 } 00:19:25.506 ]' 00:19:25.506 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.506 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.506 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.506 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:25.506 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.506 20:17:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.506 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.506 20:17:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.765 20:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzY4MGZmNTM4YzgxNDZhNmRmMjU0MDg0ZmI2NGIyNWFEugdb: --dhchap-ctrl-secret DHHC-1:02:ZGM5OTUwMDI5MjQ3MDgwNmZkMmM1ODU0YTRlMjY3ZDUzZTMxOGI3MDlkOTI0Mzhla+EIEg==: 00:19:26.700 20:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.700 20:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:26.700 20:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.700 20:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.700 20:17:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.700 20:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.700 20:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:26.700 20:17:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:26.959 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:26.959 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.959 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:26.959 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:26.959 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:26.959 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.959 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.959 20:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.959 20:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.959 20:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.959 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.959 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.218 00:19:27.218 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.218 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.218 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.477 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.477 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.477 20:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.477 20:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.477 20:17:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.477 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.477 { 00:19:27.477 "cntlid": 125, 00:19:27.477 "qid": 0, 00:19:27.477 "state": "enabled", 00:19:27.477 "thread": "nvmf_tgt_poll_group_000", 00:19:27.477 "listen_address": { 00:19:27.477 "trtype": "TCP", 00:19:27.477 "adrfam": "IPv4", 00:19:27.477 "traddr": "10.0.0.2", 00:19:27.477 "trsvcid": "4420" 00:19:27.477 }, 00:19:27.477 "peer_address": { 00:19:27.477 "trtype": "TCP", 00:19:27.477 "adrfam": "IPv4", 00:19:27.477 "traddr": "10.0.0.1", 00:19:27.477 "trsvcid": "39508" 00:19:27.477 }, 00:19:27.477 "auth": { 00:19:27.477 "state": "completed", 00:19:27.477 "digest": "sha512", 00:19:27.477 "dhgroup": "ffdhe4096" 00:19:27.477 } 00:19:27.477 } 00:19:27.477 ]' 00:19:27.477 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.477 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.477 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.477 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.477 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.740 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.740 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.740 20:17:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.999 20:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZTIwNDkzY2E3N2ViNjdiYmQzODAzMDc2Mzk1ZjFmMzhjODMzMGQ1NjMzYWRhYTUy6e5vNA==: --dhchap-ctrl-secret DHHC-1:01:OWEyNDQ0NWZmZjYyN2FmYWJkNmExNDFjY2RhYmRhNDYXMKE8: 00:19:28.564 20:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:19:28.564 20:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:28.564 20:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.564 20:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.564 20:17:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.564 20:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.564 20:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:28.564 20:17:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:29.132 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:29.132 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.132 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:29.132 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:29.132 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:29.132 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.132 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:29.132 20:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.132 20:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.132 20:17:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.132 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:29.132 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:29.766 00:19:29.766 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.766 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.766 20:17:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.025 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.025 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.025 20:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.025 20:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:30.025 20:17:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.025 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.025 { 00:19:30.025 "cntlid": 127, 00:19:30.025 "qid": 0, 00:19:30.025 "state": "enabled", 00:19:30.025 "thread": "nvmf_tgt_poll_group_000", 00:19:30.025 "listen_address": { 00:19:30.025 "trtype": "TCP", 00:19:30.025 "adrfam": "IPv4", 00:19:30.025 "traddr": "10.0.0.2", 00:19:30.025 "trsvcid": "4420" 00:19:30.025 }, 00:19:30.025 "peer_address": { 00:19:30.025 "trtype": "TCP", 00:19:30.025 "adrfam": "IPv4", 00:19:30.025 "traddr": "10.0.0.1", 00:19:30.025 "trsvcid": "39536" 00:19:30.025 }, 00:19:30.025 "auth": { 00:19:30.025 "state": "completed", 00:19:30.025 "digest": "sha512", 00:19:30.025 "dhgroup": "ffdhe4096" 00:19:30.025 } 00:19:30.025 } 00:19:30.025 ]' 00:19:30.025 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.025 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.025 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.025 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:30.025 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.025 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.025 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.025 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.283 20:17:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODE4MTBiNzQ4YWYzOTMwMjExYjVlYWEwMGQ3NjYyMDI5MzY2YzQyNGRiYmVjYjU4ZDE5MDAyMTM4NTEyZTE5MiTP0Ak=: 00:19:31.217 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.218 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:31.218 20:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.218 20:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.218 20:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.218 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.218 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.218 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:31.218 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:31.785 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:19:31.785 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.785 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:31.785 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:31.785 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:31.785 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.786 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.786 20:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.786 20:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.786 20:17:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.786 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.786 20:17:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.044 00:19:32.044 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.044 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.044 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.303 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.303 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.303 20:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.303 20:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.303 20:17:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.303 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.303 { 00:19:32.303 "cntlid": 129, 00:19:32.303 "qid": 0, 00:19:32.303 "state": "enabled", 00:19:32.303 "thread": "nvmf_tgt_poll_group_000", 00:19:32.303 "listen_address": { 00:19:32.303 "trtype": "TCP", 00:19:32.303 "adrfam": "IPv4", 00:19:32.303 "traddr": "10.0.0.2", 00:19:32.303 "trsvcid": "4420" 00:19:32.303 }, 00:19:32.303 "peer_address": { 00:19:32.303 "trtype": "TCP", 00:19:32.303 "adrfam": "IPv4", 00:19:32.303 "traddr": "10.0.0.1", 00:19:32.303 "trsvcid": "39576" 00:19:32.303 }, 00:19:32.303 "auth": { 00:19:32.303 "state": "completed", 00:19:32.303 "digest": "sha512", 00:19:32.303 "dhgroup": "ffdhe6144" 00:19:32.303 } 00:19:32.303 } 00:19:32.303 ]' 00:19:32.303 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.303 20:17:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.303 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.561 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:32.561 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.561 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.561 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.561 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.819 20:17:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OWM0OGM5NzRhYThjYmQxOTMyMWQ5MmFhZTM4NzdlYWM2ZDUwNzhhMDQ5YzEwMGI0NxiuJA==: --dhchap-ctrl-secret DHHC-1:03:NWVhOThiNTBjZDIxMTIwZGQ5NGQ2NWY3MzcyYmE4YWRhYjNjNTkwODY5N2ZmNjkwYjVkMzVmMGFjNzkzYWMyYjtQE7M=: 00:19:33.753 20:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.753 20:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:33.753 20:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.753 20:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.753 20:17:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.753 20:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.753 20:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:33.753 20:17:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:33.753 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:33.753 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.753 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:33.753 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:33.753 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:33.753 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.753 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.753 20:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.753 20:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.753 20:17:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.753 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.753 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.319 00:19:34.319 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.319 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.319 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.577 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.577 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.577 20:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.577 20:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.577 20:17:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.577 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.577 { 00:19:34.577 "cntlid": 131, 00:19:34.577 "qid": 0, 00:19:34.577 "state": "enabled", 00:19:34.577 "thread": "nvmf_tgt_poll_group_000", 00:19:34.577 "listen_address": { 00:19:34.577 "trtype": "TCP", 00:19:34.577 "adrfam": "IPv4", 00:19:34.577 "traddr": "10.0.0.2", 00:19:34.577 "trsvcid": "4420" 00:19:34.577 }, 00:19:34.577 "peer_address": { 00:19:34.577 "trtype": "TCP", 00:19:34.577 "adrfam": "IPv4", 00:19:34.577 "traddr": "10.0.0.1", 00:19:34.577 "trsvcid": "39614" 00:19:34.577 }, 00:19:34.577 "auth": { 00:19:34.577 "state": "completed", 00:19:34.577 "digest": "sha512", 00:19:34.577 "dhgroup": "ffdhe6144" 00:19:34.577 } 00:19:34.577 } 00:19:34.577 ]' 00:19:34.577 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.577 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.577 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.577 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:34.577 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.577 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.577 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.577 20:17:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.835 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzY4MGZmNTM4YzgxNDZhNmRmMjU0MDg0ZmI2NGIyNWFEugdb: --dhchap-ctrl-secret DHHC-1:02:ZGM5OTUwMDI5MjQ3MDgwNmZkMmM1ODU0YTRlMjY3ZDUzZTMxOGI3MDlkOTI0Mzhla+EIEg==: 00:19:35.769 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.769 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:35.769 20:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.769 20:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.769 20:18:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.769 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.769 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.769 20:18:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:36.026 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:36.026 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.026 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:36.026 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:36.026 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:36.026 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.026 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.026 20:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.026 20:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.026 20:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.026 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.026 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.283 00:19:36.283 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.283 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.283 20:18:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.541 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.541 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.541 20:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.541 20:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.541 20:18:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.541 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.541 { 00:19:36.541 "cntlid": 133, 00:19:36.541 "qid": 0, 00:19:36.541 "state": "enabled", 00:19:36.541 "thread": "nvmf_tgt_poll_group_000", 00:19:36.541 "listen_address": { 00:19:36.541 "trtype": "TCP", 00:19:36.541 "adrfam": "IPv4", 00:19:36.541 "traddr": "10.0.0.2", 00:19:36.541 "trsvcid": "4420" 00:19:36.541 }, 00:19:36.541 "peer_address": { 00:19:36.541 "trtype": "TCP", 00:19:36.541 "adrfam": "IPv4", 00:19:36.541 "traddr": "10.0.0.1", 00:19:36.541 "trsvcid": "39642" 00:19:36.541 }, 00:19:36.541 "auth": { 00:19:36.541 "state": "completed", 00:19:36.541 "digest": "sha512", 00:19:36.541 "dhgroup": "ffdhe6144" 00:19:36.541 } 00:19:36.541 } 00:19:36.541 ]' 00:19:36.541 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.541 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.541 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.800 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:36.800 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.800 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.800 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.800 20:18:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.059 20:18:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZTIwNDkzY2E3N2ViNjdiYmQzODAzMDc2Mzk1ZjFmMzhjODMzMGQ1NjMzYWRhYTUy6e5vNA==: --dhchap-ctrl-secret DHHC-1:01:OWEyNDQ0NWZmZjYyN2FmYWJkNmExNDFjY2RhYmRhNDYXMKE8: 00:19:37.996 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.996 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:37.996 20:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.996 20:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.996 20:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.996 20:18:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.996 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:37.996 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:37.996 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:37.996 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.996 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:37.996 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:37.996 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:37.996 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.996 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:37.996 20:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.996 20:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.996 20:18:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.996 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.996 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.563 00:19:38.563 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.563 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.563 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.821 20:18:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.821 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.821 20:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.821 20:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.821 20:18:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.821 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.821 { 00:19:38.821 "cntlid": 135, 00:19:38.821 "qid": 0, 00:19:38.821 "state": "enabled", 00:19:38.821 "thread": "nvmf_tgt_poll_group_000", 00:19:38.821 "listen_address": { 00:19:38.821 "trtype": "TCP", 00:19:38.821 "adrfam": "IPv4", 00:19:38.821 "traddr": "10.0.0.2", 00:19:38.821 "trsvcid": "4420" 00:19:38.821 }, 
00:19:38.821 "peer_address": { 00:19:38.821 "trtype": "TCP", 00:19:38.821 "adrfam": "IPv4", 00:19:38.821 "traddr": "10.0.0.1", 00:19:38.821 "trsvcid": "47022" 00:19:38.821 }, 00:19:38.821 "auth": { 00:19:38.821 "state": "completed", 00:19:38.821 "digest": "sha512", 00:19:38.821 "dhgroup": "ffdhe6144" 00:19:38.821 } 00:19:38.821 } 00:19:38.821 ]' 00:19:38.821 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.821 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.821 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.821 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:38.821 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.821 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.821 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.822 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.080 20:18:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODE4MTBiNzQ4YWYzOTMwMjExYjVlYWEwMGQ3NjYyMDI5MzY2YzQyNGRiYmVjYjU4ZDE5MDAyMTM4NTEyZTE5MiTP0Ak=: 00:19:40.017 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.017 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:40.017 20:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.017 20:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.017 20:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.017 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.017 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.017 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:40.017 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:40.585 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:40.585 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.585 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:40.585 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:40.585 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:40.585 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:40.585 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.585 20:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.585 20:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.585 20:18:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.585 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.585 20:18:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.152 00:19:41.152 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.152 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.152 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.412 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.412 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.412 20:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.412 20:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.412 20:18:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.412 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.412 { 00:19:41.412 "cntlid": 137, 00:19:41.412 "qid": 0, 00:19:41.412 "state": "enabled", 00:19:41.412 "thread": "nvmf_tgt_poll_group_000", 00:19:41.412 "listen_address": { 00:19:41.412 "trtype": "TCP", 00:19:41.412 "adrfam": "IPv4", 00:19:41.412 "traddr": "10.0.0.2", 00:19:41.412 "trsvcid": "4420" 00:19:41.412 }, 00:19:41.412 "peer_address": { 00:19:41.412 "trtype": "TCP", 00:19:41.412 "adrfam": "IPv4", 00:19:41.412 "traddr": "10.0.0.1", 00:19:41.412 "trsvcid": "47040" 00:19:41.412 }, 00:19:41.412 "auth": { 00:19:41.412 "state": "completed", 00:19:41.412 "digest": "sha512", 00:19:41.412 "dhgroup": "ffdhe8192" 00:19:41.412 } 00:19:41.412 } 00:19:41.412 ]' 00:19:41.412 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.412 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.412 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.412 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:41.412 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.671 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.671 20:18:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.671 20:18:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.930 20:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OWM0OGM5NzRhYThjYmQxOTMyMWQ5MmFhZTM4NzdlYWM2ZDUwNzhhMDQ5YzEwMGI0NxiuJA==: --dhchap-ctrl-secret DHHC-1:03:NWVhOThiNTBjZDIxMTIwZGQ5NGQ2NWY3MzcyYmE4YWRhYjNjNTkwODY5N2ZmNjkwYjVkMzVmMGFjNzkzYWMyYjtQE7M=: 00:19:42.498 20:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.758 20:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:42.758 20:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.758 20:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.758 20:18:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.758 20:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.758 20:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.758 20:18:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:43.016 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:43.016 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.016 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:43.016 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:43.016 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:43.016 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.016 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.016 20:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.016 20:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.016 20:18:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.016 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.016 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.580 00:19:43.580 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.580 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.580 20:18:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.839 20:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.839 20:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.839 20:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.839 20:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.839 20:18:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.839 20:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.839 { 00:19:43.839 "cntlid": 139, 00:19:43.839 "qid": 0, 00:19:43.839 "state": "enabled", 00:19:43.839 "thread": "nvmf_tgt_poll_group_000", 00:19:43.839 "listen_address": { 00:19:43.839 "trtype": "TCP", 00:19:43.839 "adrfam": "IPv4", 00:19:43.839 "traddr": "10.0.0.2", 00:19:43.839 "trsvcid": "4420" 00:19:43.839 }, 00:19:43.839 "peer_address": { 00:19:43.839 "trtype": "TCP", 00:19:43.839 "adrfam": "IPv4", 00:19:43.839 "traddr": "10.0.0.1", 00:19:43.839 "trsvcid": "47070" 00:19:43.839 }, 00:19:43.839 "auth": { 00:19:43.839 "state": "completed", 00:19:43.839 "digest": "sha512", 00:19:43.839 "dhgroup": "ffdhe8192" 00:19:43.839 } 00:19:43.839 } 00:19:43.839 ]' 00:19:43.839 20:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.839 20:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.839 20:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.839 20:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.839 20:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.097 20:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.097 20:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.097 20:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.355 20:18:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YzY4MGZmNTM4YzgxNDZhNmRmMjU0MDg0ZmI2NGIyNWFEugdb: --dhchap-ctrl-secret DHHC-1:02:ZGM5OTUwMDI5MjQ3MDgwNmZkMmM1ODU0YTRlMjY3ZDUzZTMxOGI3MDlkOTI0Mzhla+EIEg==: 00:19:44.922 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.922 20:18:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:44.922 20:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.922 20:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.922 20:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.922 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.922 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:44.922 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:45.180 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:45.180 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.180 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:45.180 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:45.180 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:45.180 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.180 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.180 20:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.180 20:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.180 20:18:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.180 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.180 20:18:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.117 00:19:46.117 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.117 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.117 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.117 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.117 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.117 20:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.117 20:18:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:46.117 20:18:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.117 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.117 { 00:19:46.117 "cntlid": 141, 00:19:46.117 "qid": 0, 00:19:46.117 "state": "enabled", 00:19:46.117 "thread": "nvmf_tgt_poll_group_000", 00:19:46.117 "listen_address": { 00:19:46.117 "trtype": "TCP", 00:19:46.117 "adrfam": "IPv4", 00:19:46.117 "traddr": "10.0.0.2", 00:19:46.117 "trsvcid": "4420" 00:19:46.117 }, 00:19:46.117 "peer_address": { 00:19:46.117 "trtype": "TCP", 00:19:46.117 "adrfam": "IPv4", 00:19:46.117 "traddr": "10.0.0.1", 00:19:46.117 "trsvcid": "47110" 00:19:46.117 }, 00:19:46.117 "auth": { 00:19:46.117 "state": "completed", 00:19:46.117 "digest": "sha512", 00:19:46.117 "dhgroup": "ffdhe8192" 00:19:46.117 } 00:19:46.117 } 00:19:46.117 ]' 00:19:46.117 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.117 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.117 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.376 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:46.376 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.376 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.376 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.376 20:18:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.943 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZTIwNDkzY2E3N2ViNjdiYmQzODAzMDc2Mzk1ZjFmMzhjODMzMGQ1NjMzYWRhYTUy6e5vNA==: --dhchap-ctrl-secret DHHC-1:01:OWEyNDQ0NWZmZjYyN2FmYWJkNmExNDFjY2RhYmRhNDYXMKE8: 00:19:47.510 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.510 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:47.510 20:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.510 20:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.510 20:18:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.510 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.510 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:47.510 20:18:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:47.769 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:19:47.769 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.769 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:47.769 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:47.769 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:47.769 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.769 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:47.769 20:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.769 20:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.769 20:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.769 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.769 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:48.713 00:19:48.713 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.713 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.713 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.713 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.713 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.713 20:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.713 20:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.713 20:18:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.713 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.713 { 00:19:48.713 "cntlid": 143, 00:19:48.713 "qid": 0, 00:19:48.713 "state": "enabled", 00:19:48.713 "thread": "nvmf_tgt_poll_group_000", 00:19:48.713 "listen_address": { 00:19:48.713 "trtype": "TCP", 00:19:48.713 "adrfam": "IPv4", 00:19:48.713 "traddr": "10.0.0.2", 00:19:48.713 "trsvcid": "4420" 00:19:48.713 }, 00:19:48.713 "peer_address": { 00:19:48.713 "trtype": "TCP", 00:19:48.713 "adrfam": "IPv4", 00:19:48.713 "traddr": "10.0.0.1", 00:19:48.713 "trsvcid": "57516" 00:19:48.713 }, 00:19:48.713 "auth": { 00:19:48.713 "state": "completed", 00:19:48.713 "digest": "sha512", 00:19:48.713 "dhgroup": "ffdhe8192" 00:19:48.713 } 00:19:48.713 } 00:19:48.713 ]' 00:19:48.713 20:18:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.713 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:48.713 
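(A condensed sketch of the cycle the trace above and below repeats for each digest/dhgroup/key combination; this is not captured output. The RPC script path, socket paths, addresses and NQNs are simply the ones this job uses, the network-namespace wrappers are omitted, and the kernel-host leg of the cycle — nvme connect with --dhchap-secret/--dhchap-ctrl-secret followed by nvme disconnect — is left out here because it appears verbatim in the trace.)

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Target side: allow the host on the subsystem with a DH-CHAP key pair.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # SPDK host side (RPC socket /var/tmp/host.sock): restrict the offered digests and
  # dhgroups, then attach a controller with the matching key pair.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Target side: confirm the new qpair finished authentication with the expected
  # digest and dhgroup, then detach and de-authorize the host again.
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
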
20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.971 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:48.971 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.971 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.971 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.971 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.229 20:18:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODE4MTBiNzQ4YWYzOTMwMjExYjVlYWEwMGQ3NjYyMDI5MzY2YzQyNGRiYmVjYjU4ZDE5MDAyMTM4NTEyZTE5MiTP0Ak=: 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.166 20:18:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.101 00:19:51.101 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.101 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.101 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.101 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.101 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.101 20:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.101 20:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.101 20:18:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.101 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.101 { 00:19:51.101 "cntlid": 145, 00:19:51.101 "qid": 0, 00:19:51.101 "state": "enabled", 00:19:51.101 "thread": "nvmf_tgt_poll_group_000", 00:19:51.101 "listen_address": { 00:19:51.101 "trtype": "TCP", 00:19:51.101 "adrfam": "IPv4", 00:19:51.101 "traddr": "10.0.0.2", 00:19:51.101 "trsvcid": "4420" 00:19:51.101 }, 00:19:51.101 "peer_address": { 00:19:51.101 "trtype": "TCP", 00:19:51.101 "adrfam": "IPv4", 00:19:51.101 "traddr": "10.0.0.1", 00:19:51.101 "trsvcid": "57550" 00:19:51.101 }, 00:19:51.101 "auth": { 00:19:51.101 "state": "completed", 00:19:51.101 "digest": "sha512", 00:19:51.101 "dhgroup": "ffdhe8192" 00:19:51.101 } 00:19:51.101 } 00:19:51.101 ]' 00:19:51.101 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.101 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.359 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.359 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:51.359 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.359 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.359 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.359 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.617 20:18:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:OWM0OGM5NzRhYThjYmQxOTMyMWQ5MmFhZTM4NzdlYWM2ZDUwNzhhMDQ5YzEwMGI0NxiuJA==: --dhchap-ctrl-secret DHHC-1:03:NWVhOThiNTBjZDIxMTIwZGQ5NGQ2NWY3MzcyYmE4YWRhYjNjNTkwODY5N2ZmNjkwYjVkMzVmMGFjNzkzYWMyYjtQE7M=: 00:19:52.592 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.593 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:52.593 20:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.593 20:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.593 20:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.593 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:19:52.593 20:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.593 20:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.593 20:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.593 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:52.593 20:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:52.593 20:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:52.593 20:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:52.593 20:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:52.593 20:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:52.593 20:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:52.593 20:18:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:52.593 20:18:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:19:53.159 request: 00:19:53.159 { 00:19:53.159 "name": "nvme0", 00:19:53.159 "trtype": "tcp", 00:19:53.159 "traddr": "10.0.0.2", 00:19:53.159 "adrfam": "ipv4", 00:19:53.159 "trsvcid": "4420", 00:19:53.159 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:53.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:53.159 "prchk_reftag": false, 00:19:53.159 "prchk_guard": false, 00:19:53.159 "hdgst": false, 00:19:53.159 "ddgst": false, 00:19:53.159 "dhchap_key": "key2", 00:19:53.159 "method": "bdev_nvme_attach_controller", 00:19:53.159 "req_id": 1 00:19:53.159 } 00:19:53.159 Got JSON-RPC error response 00:19:53.159 response: 00:19:53.159 { 00:19:53.159 "code": -5, 00:19:53.159 "message": "Input/output error" 00:19:53.159 } 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:53.159 20:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:53.726 request: 00:19:53.726 { 00:19:53.726 "name": "nvme0", 00:19:53.726 "trtype": "tcp", 00:19:53.726 "traddr": "10.0.0.2", 00:19:53.726 "adrfam": "ipv4", 00:19:53.726 "trsvcid": "4420", 00:19:53.726 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:53.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:53.726 "prchk_reftag": false, 00:19:53.726 "prchk_guard": false, 00:19:53.726 "hdgst": false, 00:19:53.726 "ddgst": false, 00:19:53.726 "dhchap_key": "key1", 00:19:53.726 "dhchap_ctrlr_key": "ckey2", 00:19:53.726 "method": "bdev_nvme_attach_controller", 00:19:53.726 "req_id": 1 00:19:53.726 } 00:19:53.726 Got JSON-RPC error response 00:19:53.726 response: 00:19:53.726 { 00:19:53.726 "code": -5, 00:19:53.726 "message": "Input/output error" 00:19:53.726 } 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.726 20:18:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.295 request: 00:19:54.295 { 00:19:54.295 "name": "nvme0", 00:19:54.295 "trtype": "tcp", 00:19:54.295 "traddr": "10.0.0.2", 00:19:54.295 "adrfam": "ipv4", 00:19:54.295 "trsvcid": "4420", 00:19:54.295 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:54.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:54.295 "prchk_reftag": false, 00:19:54.295 "prchk_guard": false, 00:19:54.295 "hdgst": false, 00:19:54.295 "ddgst": false, 00:19:54.295 "dhchap_key": "key1", 00:19:54.295 "dhchap_ctrlr_key": "ckey1", 00:19:54.295 "method": "bdev_nvme_attach_controller", 00:19:54.295 "req_id": 1 00:19:54.295 } 00:19:54.295 Got JSON-RPC error response 00:19:54.295 response: 00:19:54.295 { 00:19:54.295 "code": -5, 00:19:54.295 "message": "Input/output error" 00:19:54.295 } 00:19:54.295 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:54.295 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:54.295 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:54.295 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:54.295 20:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:54.295 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.295 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.295 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.295 20:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 35687 00:19:54.295 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 35687 ']' 00:19:54.295 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 35687 00:19:54.295 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:54.295 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:54.295 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 35687 00:19:54.295 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:54.295 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = 
sudo ']' 00:19:54.295 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 35687' 00:19:54.295 killing process with pid 35687 00:19:54.295 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 35687 00:19:54.295 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 35687 00:19:54.554 20:18:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:54.554 20:18:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:54.554 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:54.554 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.554 20:18:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=66018 00:19:54.554 20:18:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:54.554 20:18:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 66018 00:19:54.554 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 66018 ']' 00:19:54.554 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.554 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.554 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.554 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.554 20:18:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.814 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:54.814 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:54.814 20:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:54.814 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:54.814 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.814 20:18:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.814 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:54.814 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 66018 00:19:54.814 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 66018 ']' 00:19:54.814 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.814 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.814 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
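(A similar hand-written condensation of the mismatched-key checks run above, before the target was restarted with -L nvmf_auth: when the host offers a key the subsystem was not configured with, the attach is expected to fail with JSON-RPC error -5, "Input/output error", which is what the request/response dumps above record. The test harness expresses this with its NOT helper; a plain conditional is used here for illustration.)

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Only key1 is allowed for this host on the subsystem ...
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1

  # ... so an attach that offers key2 must be rejected by the target.
  if $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2; then
      echo "authentication unexpectedly succeeded" >&2
      exit 1
  fi
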
00:19:54.814 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.814 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.071 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.071 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:55.071 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:55.071 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.071 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.330 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.330 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:55.330 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.330 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:55.330 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:55.330 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:55.330 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.330 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:55.330 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.330 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.330 20:18:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.330 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.330 20:18:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.898 00:19:55.898 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.898 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.898 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.158 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.158 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.158 20:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.158 20:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.158 20:18:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.158 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.158 { 00:19:56.158 
"cntlid": 1, 00:19:56.158 "qid": 0, 00:19:56.158 "state": "enabled", 00:19:56.158 "thread": "nvmf_tgt_poll_group_000", 00:19:56.158 "listen_address": { 00:19:56.158 "trtype": "TCP", 00:19:56.158 "adrfam": "IPv4", 00:19:56.158 "traddr": "10.0.0.2", 00:19:56.158 "trsvcid": "4420" 00:19:56.158 }, 00:19:56.158 "peer_address": { 00:19:56.158 "trtype": "TCP", 00:19:56.158 "adrfam": "IPv4", 00:19:56.158 "traddr": "10.0.0.1", 00:19:56.158 "trsvcid": "57614" 00:19:56.158 }, 00:19:56.158 "auth": { 00:19:56.158 "state": "completed", 00:19:56.158 "digest": "sha512", 00:19:56.158 "dhgroup": "ffdhe8192" 00:19:56.158 } 00:19:56.158 } 00:19:56.158 ]' 00:19:56.158 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.158 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.158 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.158 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:56.158 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.158 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.158 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.158 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.417 20:18:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODE4MTBiNzQ4YWYzOTMwMjExYjVlYWEwMGQ3NjYyMDI5MzY2YzQyNGRiYmVjYjU4ZDE5MDAyMTM4NTEyZTE5MiTP0Ak=: 00:19:57.358 20:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.358 20:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:57.358 20:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.358 20:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.358 20:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.358 20:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:57.358 20:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.358 20:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.358 20:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.358 20:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:57.358 20:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:57.358 20:18:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:57.358 20:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:57.358 20:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:57.358 20:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:57.358 20:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:57.358 20:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:57.358 20:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:57.358 20:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:57.358 20:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:57.618 request: 00:19:57.618 { 00:19:57.618 "name": "nvme0", 00:19:57.618 "trtype": "tcp", 00:19:57.618 "traddr": "10.0.0.2", 00:19:57.618 "adrfam": "ipv4", 00:19:57.618 "trsvcid": "4420", 00:19:57.618 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:57.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:57.618 "prchk_reftag": false, 00:19:57.618 "prchk_guard": false, 00:19:57.618 "hdgst": false, 00:19:57.618 "ddgst": false, 00:19:57.618 "dhchap_key": "key3", 00:19:57.618 "method": "bdev_nvme_attach_controller", 00:19:57.618 "req_id": 1 00:19:57.618 } 00:19:57.618 Got JSON-RPC error response 00:19:57.618 response: 00:19:57.618 { 00:19:57.618 "code": -5, 00:19:57.618 "message": "Input/output error" 00:19:57.618 } 00:19:57.618 20:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:57.618 20:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:57.618 20:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:57.618 20:18:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:57.618 20:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:57.618 20:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:57.618 20:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:57.618 20:18:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:57.877 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:57.877 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:57.877 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:57.877 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:57.877 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:57.877 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:57.877 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:57.878 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:57.878 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.140 request: 00:19:58.140 { 00:19:58.140 "name": "nvme0", 00:19:58.140 "trtype": "tcp", 00:19:58.140 "traddr": "10.0.0.2", 00:19:58.140 "adrfam": "ipv4", 00:19:58.140 "trsvcid": "4420", 00:19:58.140 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:58.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:58.140 "prchk_reftag": false, 00:19:58.140 "prchk_guard": false, 00:19:58.140 "hdgst": false, 00:19:58.140 "ddgst": false, 00:19:58.140 "dhchap_key": "key3", 00:19:58.140 "method": "bdev_nvme_attach_controller", 00:19:58.140 "req_id": 1 00:19:58.140 } 00:19:58.140 Got JSON-RPC error response 00:19:58.140 response: 00:19:58.140 { 00:19:58.140 "code": -5, 00:19:58.140 "message": "Input/output error" 00:19:58.140 } 00:19:58.140 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:58.140 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:58.140 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:58.140 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:58.140 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:58.140 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:58.140 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:58.140 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:58.140 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:58.140 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:58.399 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:58.399 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.399 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.399 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.399 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:58.399 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.399 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.399 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.399 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:58.399 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:58.399 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:58.400 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:58.400 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.400 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:58.400 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.400 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:58.400 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:58.658 request: 00:19:58.658 { 00:19:58.658 "name": "nvme0", 00:19:58.658 "trtype": "tcp", 00:19:58.658 "traddr": "10.0.0.2", 00:19:58.658 "adrfam": "ipv4", 00:19:58.658 "trsvcid": "4420", 00:19:58.658 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:58.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:19:58.658 "prchk_reftag": false, 00:19:58.658 "prchk_guard": false, 00:19:58.658 "hdgst": false, 00:19:58.658 "ddgst": false, 00:19:58.658 
"dhchap_key": "key0", 00:19:58.658 "dhchap_ctrlr_key": "key1", 00:19:58.658 "method": "bdev_nvme_attach_controller", 00:19:58.658 "req_id": 1 00:19:58.658 } 00:19:58.658 Got JSON-RPC error response 00:19:58.658 response: 00:19:58.658 { 00:19:58.658 "code": -5, 00:19:58.658 "message": "Input/output error" 00:19:58.658 } 00:19:58.658 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:58.658 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:58.658 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:58.658 20:18:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:58.658 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:58.658 20:18:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:58.918 00:19:58.918 20:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:58.918 20:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:58.918 20:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.177 20:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.177 20:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.177 20:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.437 20:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:59.437 20:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:59.437 20:18:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 35733 00:19:59.437 20:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 35733 ']' 00:19:59.437 20:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 35733 00:19:59.437 20:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:59.437 20:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.437 20:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 35733 00:19:59.437 20:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:59.437 20:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:59.437 20:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 35733' 00:19:59.437 killing process with pid 35733 00:19:59.437 20:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 35733 00:19:59.437 20:18:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 35733 00:20:00.005 20:18:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:00.005 rmmod nvme_tcp 00:20:00.005 rmmod nvme_fabrics 00:20:00.005 rmmod nvme_keyring 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 66018 ']' 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 66018 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 66018 ']' 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 66018 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66018 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66018' 00:20:00.005 killing process with pid 66018 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 66018 00:20:00.005 20:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 66018 00:20:00.264 20:18:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:00.264 20:18:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:00.264 20:18:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:00.264 20:18:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:00.264 20:18:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:00.264 20:18:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.264 20:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.264 20:18:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.253 20:18:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:02.253 20:18:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Dpx /tmp/spdk.key-sha256.fpV /tmp/spdk.key-sha384.M5y /tmp/spdk.key-sha512.7Ip /tmp/spdk.key-sha512.vKm /tmp/spdk.key-sha384.MHh /tmp/spdk.key-sha256.Gqu '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:02.253 00:20:02.253 real 2m55.913s 00:20:02.253 user 6m52.422s 00:20:02.253 sys 0m23.600s 00:20:02.253 20:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:02.253 20:18:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.253 ************************************ 00:20:02.253 END TEST nvmf_auth_target 00:20:02.253 ************************************ 00:20:02.253 20:18:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:02.253 20:18:27 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:20:02.253 20:18:27 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:02.253 20:18:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:02.253 20:18:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.253 20:18:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:02.253 ************************************ 00:20:02.253 START TEST nvmf_bdevio_no_huge 00:20:02.253 ************************************ 00:20:02.253 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:02.513 * Looking for test storage... 00:20:02.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
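
For orientation, the nvmf_tgt command line used later in this test is built up piece by piece by nvmf/common.sh, as traced here and continued below. A rough bash sketch of that assembly; the array contents marked as inferred are taken from the final command line that appears later in this log, not quoted from common.sh itself:

NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
NVMF_APP_SHM_ID=0              # inferred from the '-i 0' in the launch line below
NO_HUGE=(--no-huge -s 1024)    # inferred from the '--no-huge -s 1024' in the launch line below
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)              # common.sh@29
NVMF_APP+=("${NO_HUGE[@]}")                              # common.sh@31
# Once the test namespace exists, the whole command is prefixed with the netns wrapper:
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)       # common.sh@243
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # common.sh@270
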
00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:02.513 20:18:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:07.789 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:07.789 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:07.789 Found net devices under 0000:af:00.0: cvl_0_0 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:07.789 Found net devices under 0000:af:00.1: cvl_0_1 00:20:07.789 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.790 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:07.790 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:07.790 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:07.790 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:07.790 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:07.790 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.790 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.790 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:07.790 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:07.790 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:07.790 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:07.790 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:07.790 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:07.790 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.790 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:07.790 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:07.790 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:07.790 20:18:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:07.790 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:07.790 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:07.790 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:07.790 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:08.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:20:08.054 00:20:08.054 --- 10.0.0.2 ping statistics --- 00:20:08.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.054 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:08.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:08.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:20:08.054 00:20:08.054 --- 10.0.0.1 ping statistics --- 00:20:08.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.054 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=70594 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 70594 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 70594 ']' 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:08.054 20:18:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:08.054 [2024-07-15 20:18:33.308564] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:20:08.054 [2024-07-15 20:18:33.308629] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:08.054 [2024-07-15 20:18:33.393490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:08.319 [2024-07-15 20:18:33.511555] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
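
To summarize the plumbing traced above: nvmf_tcp_init moves the target-side e810 port cvl_0_0 into a network namespace, leaves the initiator-side port cvl_0_1 in the root namespace, addresses both on 10.0.0.0/24, opens TCP/4420, verifies reachability with ping, and then nvmfappstart launches nvmf_tgt inside the namespace without hugepages. A slightly condensed restatement of the commands shown in this log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Target launched in the namespace with a 1024 MB regular-memory pool, no hugepages:
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
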
00:20:08.319 [2024-07-15 20:18:33.511595] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.319 [2024-07-15 20:18:33.511607] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.319 [2024-07-15 20:18:33.511616] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.319 [2024-07-15 20:18:33.511623] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.319 [2024-07-15 20:18:33.512109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:08.319 [2024-07-15 20:18:33.512201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:08.319 [2024-07-15 20:18:33.512313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:08.319 [2024-07-15 20:18:33.512314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:09.256 [2024-07-15 20:18:34.306364] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:09.256 Malloc0 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.256 20:18:34 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:09.256 [2024-07-15 20:18:34.354900] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.256 { 00:20:09.256 "params": { 00:20:09.256 "name": "Nvme$subsystem", 00:20:09.256 "trtype": "$TEST_TRANSPORT", 00:20:09.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.256 "adrfam": "ipv4", 00:20:09.256 "trsvcid": "$NVMF_PORT", 00:20:09.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.256 "hdgst": ${hdgst:-false}, 00:20:09.256 "ddgst": ${ddgst:-false} 00:20:09.256 }, 00:20:09.256 "method": "bdev_nvme_attach_controller" 00:20:09.256 } 00:20:09.256 EOF 00:20:09.256 )") 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:09.256 20:18:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:09.256 "params": { 00:20:09.256 "name": "Nvme1", 00:20:09.256 "trtype": "tcp", 00:20:09.256 "traddr": "10.0.0.2", 00:20:09.256 "adrfam": "ipv4", 00:20:09.256 "trsvcid": "4420", 00:20:09.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:09.256 "hdgst": false, 00:20:09.256 "ddgst": false 00:20:09.256 }, 00:20:09.256 "method": "bdev_nvme_attach_controller" 00:20:09.256 }' 00:20:09.256 [2024-07-15 20:18:34.406914] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
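
The bdevio process above is configured with a JSON document passed over /dev/fd/62 rather than through RPC. Reformatted for readability, the controller entry printed by gen_nvmf_target_json looks as follows; the enclosing "subsystems"/"bdev" wrapper is not visible in this log and is assumed here to follow the usual SPDK JSON-config layout:

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
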
00:20:09.256 [2024-07-15 20:18:34.406976] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70862 ] 00:20:09.256 [2024-07-15 20:18:34.496158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:09.516 [2024-07-15 20:18:34.613855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.516 [2024-07-15 20:18:34.613956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.516 [2024-07-15 20:18:34.613957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.774 I/O targets: 00:20:09.774 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:09.774 00:20:09.774 00:20:09.774 CUnit - A unit testing framework for C - Version 2.1-3 00:20:09.774 http://cunit.sourceforge.net/ 00:20:09.774 00:20:09.774 00:20:09.774 Suite: bdevio tests on: Nvme1n1 00:20:09.774 Test: blockdev write read block ...passed 00:20:09.774 Test: blockdev write zeroes read block ...passed 00:20:09.774 Test: blockdev write zeroes read no split ...passed 00:20:09.774 Test: blockdev write zeroes read split ...passed 00:20:09.774 Test: blockdev write zeroes read split partial ...passed 00:20:09.774 Test: blockdev reset ...[2024-07-15 20:18:35.059128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:09.774 [2024-07-15 20:18:35.059204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e55520 (9): Bad file descriptor 00:20:09.774 [2024-07-15 20:18:35.113167] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:09.774 passed 00:20:09.774 Test: blockdev write read 8 blocks ...passed 00:20:09.774 Test: blockdev write read size > 128k ...passed 00:20:09.774 Test: blockdev write read invalid size ...passed 00:20:10.033 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:10.033 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:10.033 Test: blockdev write read max offset ...passed 00:20:10.033 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:10.033 Test: blockdev writev readv 8 blocks ...passed 00:20:10.033 Test: blockdev writev readv 30 x 1block ...passed 00:20:10.033 Test: blockdev writev readv block ...passed 00:20:10.033 Test: blockdev writev readv size > 128k ...passed 00:20:10.033 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:10.033 Test: blockdev comparev and writev ...[2024-07-15 20:18:35.371412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:10.033 [2024-07-15 20:18:35.371438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.033 [2024-07-15 20:18:35.371450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:10.033 [2024-07-15 20:18:35.371457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:10.033 [2024-07-15 20:18:35.371952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:10.033 [2024-07-15 20:18:35.371962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:10.033 [2024-07-15 20:18:35.371973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:10.033 [2024-07-15 20:18:35.371979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:10.033 [2024-07-15 20:18:35.372486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:10.033 [2024-07-15 20:18:35.372496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:10.033 [2024-07-15 20:18:35.372506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:10.033 [2024-07-15 20:18:35.372513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:10.033 [2024-07-15 20:18:35.373000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:10.033 [2024-07-15 20:18:35.373009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:10.033 [2024-07-15 20:18:35.373019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:10.033 [2024-07-15 20:18:35.373026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:10.292 passed 00:20:10.292 Test: blockdev nvme passthru rw ...passed 00:20:10.292 Test: blockdev nvme passthru vendor specific ...[2024-07-15 20:18:35.455689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:10.292 [2024-07-15 20:18:35.455703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:10.292 [2024-07-15 20:18:35.455898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:10.292 [2024-07-15 20:18:35.455907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:10.292 [2024-07-15 20:18:35.456102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:10.292 [2024-07-15 20:18:35.456110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:10.292 [2024-07-15 20:18:35.456315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:10.292 [2024-07-15 20:18:35.456324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:10.292 passed 00:20:10.292 Test: blockdev nvme admin passthru ...passed 00:20:10.292 Test: blockdev copy ...passed 00:20:10.292 00:20:10.292 Run Summary: Type Total Ran Passed Failed Inactive 00:20:10.292 suites 1 1 n/a 0 0 00:20:10.292 tests 23 23 23 0 0 00:20:10.292 asserts 152 152 152 0 n/a 00:20:10.292 00:20:10.292 Elapsed time = 1.158 seconds 00:20:10.550 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:10.550 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.550 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:10.550 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.550 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:10.550 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:10.550 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:10.550 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:10.550 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:10.550 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:10.550 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:10.550 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:10.550 rmmod nvme_tcp 00:20:10.809 rmmod nvme_fabrics 00:20:10.809 rmmod nvme_keyring 00:20:10.809 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:10.809 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:10.809 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:10.809 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 70594 ']' 00:20:10.809 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 70594 00:20:10.809 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 70594 ']' 00:20:10.809 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 70594 00:20:10.809 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:10.809 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:10.809 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70594 00:20:10.809 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:10.809 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:10.809 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70594' 00:20:10.809 killing process with pid 70594 00:20:10.809 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 70594 00:20:10.809 20:18:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 70594 00:20:11.067 20:18:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:11.067 20:18:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:11.067 20:18:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:11.067 20:18:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:11.067 20:18:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:11.067 20:18:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.067 20:18:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.067 20:18:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.598 20:18:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:13.598 00:20:13.598 real 0m10.890s 00:20:13.598 user 0m15.361s 00:20:13.598 sys 0m5.326s 00:20:13.598 20:18:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:13.598 20:18:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:13.598 ************************************ 00:20:13.598 END TEST nvmf_bdevio_no_huge 00:20:13.598 ************************************ 00:20:13.598 20:18:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:13.598 20:18:38 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:13.598 20:18:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:13.598 20:18:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:13.598 20:18:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:13.598 ************************************ 00:20:13.598 START TEST nvmf_tls 00:20:13.598 ************************************ 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:13.598 * Looking for test storage... 
00:20:13.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:13.598 20:18:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.899 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.899 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:18.899 
20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:18.899 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:18.899 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:18.899 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:18.899 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:18.899 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:18.899 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:18.899 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:18.899 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:18.899 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:18.899 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:18.899 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:18.899 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:18.899 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:18.900 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:18.900 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:18.900 Found net devices under 0000:af:00.0: cvl_0_0 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:18.900 Found net devices under 0000:af:00.1: cvl_0_1 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:18.900 20:18:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:18.900 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.900 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.900 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.900 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.900 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:18.900 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.900 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.900 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.900 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:19.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:20:19.159 00:20:19.159 --- 10.0.0.2 ping statistics --- 00:20:19.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.159 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:19.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:19.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:20:19.159 00:20:19.159 --- 10.0.0.1 ping statistics --- 00:20:19.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.159 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74843 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74843 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74843 ']' 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:19.159 20:18:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.159 [2024-07-15 20:18:44.357832] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:20:19.159 [2024-07-15 20:18:44.357887] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.159 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.159 [2024-07-15 20:18:44.437418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.418 [2024-07-15 20:18:44.527632] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.418 [2024-07-15 20:18:44.527675] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
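For reference, the interface plumbing traced above (nvmf_tcp_init in nvmf/common.sh) reduces to roughly the following sequence; cvl_0_0 and cvl_0_1 are the two e810 ports detected earlier, and 10.0.0.1/10.0.0.2 is the harness's fixed addressing:

# rough sketch of the namespace split performed above
ip netns add cvl_0_0_ns_spdk                                  # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # first port moves to the target namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side keeps the second port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                            # reachability checks, as printed above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1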
00:20:19.418 [2024-07-15 20:18:44.527685] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.418 [2024-07-15 20:18:44.527694] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.418 [2024-07-15 20:18:44.527702] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.418 [2024-07-15 20:18:44.527726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.418 20:18:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.418 20:18:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:19.418 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:19.418 20:18:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:19.418 20:18:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.418 20:18:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.418 20:18:44 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:19.418 20:18:44 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:19.676 true 00:20:19.676 20:18:44 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:19.676 20:18:44 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:19.933 20:18:45 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:19.933 20:18:45 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:19.933 20:18:45 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:20.190 20:18:45 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:20.190 20:18:45 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:20.448 20:18:45 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:20.448 20:18:45 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:20.448 20:18:45 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:20.706 20:18:45 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:20.706 20:18:45 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:20.982 20:18:46 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:20.982 20:18:46 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:20.982 20:18:46 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:20.982 20:18:46 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:20.982 20:18:46 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:20.982 20:18:46 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:20.982 20:18:46 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:21.239 20:18:46 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # jq -r .enable_ktls 00:20:21.239 20:18:46 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:21.497 20:18:46 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:21.497 20:18:46 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:21.497 20:18:46 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:21.756 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:21.756 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:22.013 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:22.013 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:22.013 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:22.013 20:18:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:22.013 20:18:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:22.013 20:18:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:22.013 20:18:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:22.013 20:18:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:22.013 20:18:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:22.013 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:22.013 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:22.013 20:18:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:22.013 20:18:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:22.013 20:18:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:22.013 20:18:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:22.013 20:18:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:22.013 20:18:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:22.271 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:22.271 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:22.271 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.dda7ceQAAd 00:20:22.271 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:22.271 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.t8uIs7MbYp 00:20:22.271 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:22.271 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:22.271 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.dda7ceQAAd 00:20:22.271 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.t8uIs7MbYp 00:20:22.271 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:22.529 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:22.788 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.dda7ceQAAd 00:20:22.788 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.dda7ceQAAd 00:20:22.788 20:18:47 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:23.368 [2024-07-15 20:18:48.412439] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.368 20:18:48 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:23.368 20:18:48 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:23.626 [2024-07-15 20:18:48.901714] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:23.626 [2024-07-15 20:18:48.901930] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.626 20:18:48 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:23.885 malloc0 00:20:23.885 20:18:49 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:24.143 20:18:49 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dda7ceQAAd 00:20:24.400 [2024-07-15 20:18:49.608854] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:24.400 20:18:49 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.dda7ceQAAd 00:20:24.400 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.604 Initializing NVMe Controllers 00:20:36.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:36.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:36.604 Initialization complete. Launching workers. 
00:20:36.604 ======================================================== 00:20:36.604 Latency(us) 00:20:36.604 Device Information : IOPS MiB/s Average min max 00:20:36.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10890.46 42.54 5877.79 1239.00 6413.21 00:20:36.604 ======================================================== 00:20:36.604 Total : 10890.46 42.54 5877.79 1239.00 6413.21 00:20:36.604 00:20:36.604 20:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dda7ceQAAd 00:20:36.604 20:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:36.604 20:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:36.604 20:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:36.604 20:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.dda7ceQAAd' 00:20:36.604 20:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:36.604 20:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77532 00:20:36.604 20:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:36.604 20:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:36.604 20:18:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77532 /var/tmp/bdevperf.sock 00:20:36.604 20:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77532 ']' 00:20:36.604 20:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.604 20:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.604 20:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.604 20:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.604 20:18:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.604 [2024-07-15 20:18:59.790149] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
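Condensed from the trace above, the TLS target bring-up (setup_nvmf_tgt in target/tls.sh) and the first data-path check come down to roughly this sequence; rpc.py stands for scripts/rpc.py, the key file is the first interchange PSK written to /tmp above, and both the target and the perf initiator are launched through ip netns exec cvl_0_0_ns_spdk as shown:

# write the interchange PSK to a 0600 file and register it for host1 on cnode1
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > /tmp/tmp.dda7ceQAAd
chmod 0600 /tmp/tmp.dda7ceQAAd
# nvmf_tgt was started with --wait-for-rpc, so the sock layer can still be reconfigured
rpc.py sock_set_default_impl -i ssl
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS on the listener
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dda7ceQAAd
# first data-path check: spdk_nvme_perf over TLS with the same key
spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
  --psk-path /tmp/tmp.dda7ceQAAd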
00:20:36.604 [2024-07-15 20:18:59.790216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77532 ] 00:20:36.604 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.604 [2024-07-15 20:18:59.847280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.604 [2024-07-15 20:18:59.914208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.604 20:19:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:36.604 20:19:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:36.604 20:19:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dda7ceQAAd 00:20:36.604 [2024-07-15 20:19:00.237114] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.604 [2024-07-15 20:19:00.237177] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:36.604 TLSTESTn1 00:20:36.604 20:19:00 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:36.604 Running I/O for 10 seconds... 00:20:46.595 00:20:46.595 Latency(us) 00:20:46.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.595 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:46.595 Verification LBA range: start 0x0 length 0x2000 00:20:46.595 TLSTESTn1 : 10.02 3641.05 14.22 0.00 0.00 35106.59 5213.09 45756.04 00:20:46.595 =================================================================================================================== 00:20:46.595 Total : 3641.05 14.22 0.00 0.00 35106.59 5213.09 45756.04 00:20:46.595 0 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 77532 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77532 ']' 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77532 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77532 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77532' 00:20:46.595 killing process with pid 77532 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77532 00:20:46.595 Received shutdown signal, test time was about 10.000000 seconds 00:20:46.595 00:20:46.595 Latency(us) 00:20:46.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:46.595 =================================================================================================================== 00:20:46.595 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:46.595 [2024-07-15 20:19:10.573384] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77532 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.t8uIs7MbYp 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.t8uIs7MbYp 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.t8uIs7MbYp 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.t8uIs7MbYp' 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=79432 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 79432 /var/tmp/bdevperf.sock 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 79432 ']' 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:46.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.595 20:19:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.595 [2024-07-15 20:19:10.803355] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
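The run_bdevperf helper exercised above for TLSTESTn1, and reused by the negative cases that follow, is essentially the following three steps (binary paths abbreviated relative to the SPDK checkout):

# start bdevperf with its own RPC socket, attach a TLS-protected controller, then drive I/O through it
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dda7ceQAAd
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests   # runs the verify workload; the process is killed afterwards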
00:20:46.595 [2024-07-15 20:19:10.803415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79432 ] 00:20:46.595 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.596 [2024-07-15 20:19:10.861393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.596 [2024-07-15 20:19:10.934259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.t8uIs7MbYp 00:20:46.596 [2024-07-15 20:19:11.250825] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:46.596 [2024-07-15 20:19:11.250890] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:46.596 [2024-07-15 20:19:11.255340] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:46.596 [2024-07-15 20:19:11.255953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2536af0 (107): Transport endpoint is not connected 00:20:46.596 [2024-07-15 20:19:11.256942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2536af0 (9): Bad file descriptor 00:20:46.596 [2024-07-15 20:19:11.257943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:46.596 [2024-07-15 20:19:11.257952] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:46.596 [2024-07-15 20:19:11.257960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:46.596 request: 00:20:46.596 { 00:20:46.596 "name": "TLSTEST", 00:20:46.596 "trtype": "tcp", 00:20:46.596 "traddr": "10.0.0.2", 00:20:46.596 "adrfam": "ipv4", 00:20:46.596 "trsvcid": "4420", 00:20:46.596 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.596 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.596 "prchk_reftag": false, 00:20:46.596 "prchk_guard": false, 00:20:46.596 "hdgst": false, 00:20:46.596 "ddgst": false, 00:20:46.596 "psk": "/tmp/tmp.t8uIs7MbYp", 00:20:46.596 "method": "bdev_nvme_attach_controller", 00:20:46.596 "req_id": 1 00:20:46.596 } 00:20:46.596 Got JSON-RPC error response 00:20:46.596 response: 00:20:46.596 { 00:20:46.596 "code": -5, 00:20:46.596 "message": "Input/output error" 00:20:46.596 } 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 79432 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 79432 ']' 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 79432 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79432 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79432' 00:20:46.596 killing process with pid 79432 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 79432 00:20:46.596 Received shutdown signal, test time was about 10.000000 seconds 00:20:46.596 00:20:46.596 Latency(us) 00:20:46.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.596 =================================================================================================================== 00:20:46.596 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:46.596 [2024-07-15 20:19:11.325500] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 79432 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.dda7ceQAAd 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.dda7ceQAAd 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.dda7ceQAAd 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.dda7ceQAAd' 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=79633 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 79633 /var/tmp/bdevperf.sock 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 79633 ']' 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:46.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.596 [2024-07-15 20:19:11.545685] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
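Each of these cases wraps run_bdevperf in the NOT helper from autotest_common.sh, so the case passes only when the attach fails: with a mismatched key or an unregistered (hostnqn, subnqn) identity the target either cannot find a PSK for the identity or tears the connection down, bdev_nvme_attach_controller returns the -5 "Input/output error" response shown above, run_bdevperf hits its "return 1" at target/tls.sh@37, and NOT inverts that into success. The case now starting is, from the trace:

NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.dda7ceQAAd   # valid key, but no PSK registered for host2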
00:20:46.596 [2024-07-15 20:19:11.545747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79633 ] 00:20:46.596 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.596 [2024-07-15 20:19:11.603754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.596 [2024-07-15 20:19:11.666458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:46.596 20:19:11 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.dda7ceQAAd 00:20:46.856 [2024-07-15 20:19:11.977240] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:46.856 [2024-07-15 20:19:11.977322] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:46.856 [2024-07-15 20:19:11.984513] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:46.856 [2024-07-15 20:19:11.984541] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:46.856 [2024-07-15 20:19:11.984570] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:46.856 [2024-07-15 20:19:11.985429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1620af0 (107): Transport endpoint is not connected 00:20:46.856 [2024-07-15 20:19:11.986419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1620af0 (9): Bad file descriptor 00:20:46.856 [2024-07-15 20:19:11.987421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:46.856 [2024-07-15 20:19:11.987430] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:46.856 [2024-07-15 20:19:11.987439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:46.856 request: 00:20:46.856 { 00:20:46.856 "name": "TLSTEST", 00:20:46.856 "trtype": "tcp", 00:20:46.856 "traddr": "10.0.0.2", 00:20:46.856 "adrfam": "ipv4", 00:20:46.856 "trsvcid": "4420", 00:20:46.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.856 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:46.856 "prchk_reftag": false, 00:20:46.856 "prchk_guard": false, 00:20:46.856 "hdgst": false, 00:20:46.856 "ddgst": false, 00:20:46.856 "psk": "/tmp/tmp.dda7ceQAAd", 00:20:46.856 "method": "bdev_nvme_attach_controller", 00:20:46.856 "req_id": 1 00:20:46.856 } 00:20:46.856 Got JSON-RPC error response 00:20:46.856 response: 00:20:46.856 { 00:20:46.856 "code": -5, 00:20:46.856 "message": "Input/output error" 00:20:46.856 } 00:20:46.856 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 79633 00:20:46.856 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 79633 ']' 00:20:46.856 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 79633 00:20:46.856 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:46.856 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:46.856 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79633 00:20:46.856 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:46.856 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:46.856 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79633' 00:20:46.856 killing process with pid 79633 00:20:46.856 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 79633 00:20:46.856 Received shutdown signal, test time was about 10.000000 seconds 00:20:46.856 00:20:46.856 Latency(us) 00:20:46.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.856 =================================================================================================================== 00:20:46.856 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:46.856 [2024-07-15 20:19:12.053572] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:46.856 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 79633 00:20:47.114 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:47.114 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:47.114 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:47.114 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:47.114 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.dda7ceQAAd 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.dda7ceQAAd 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.dda7ceQAAd 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.dda7ceQAAd' 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=79690 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 79690 /var/tmp/bdevperf.sock 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 79690 ']' 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:47.115 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.115 [2024-07-15 20:19:12.261392] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:20:47.115 [2024-07-15 20:19:12.261453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79690 ] 00:20:47.115 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.115 [2024-07-15 20:19:12.319772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.115 [2024-07-15 20:19:12.385728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.373 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:47.373 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:47.373 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dda7ceQAAd 00:20:47.373 [2024-07-15 20:19:12.692544] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:47.373 [2024-07-15 20:19:12.692616] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:47.373 [2024-07-15 20:19:12.696977] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:47.373 [2024-07-15 20:19:12.697004] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:47.373 [2024-07-15 20:19:12.697034] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:47.373 [2024-07-15 20:19:12.697683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d7af0 (107): Transport endpoint is not connected 00:20:47.373 [2024-07-15 20:19:12.698672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d7af0 (9): Bad file descriptor 00:20:47.373 [2024-07-15 20:19:12.699674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:47.373 [2024-07-15 20:19:12.699683] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:47.373 [2024-07-15 20:19:12.699692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:47.373 request: 00:20:47.373 { 00:20:47.373 "name": "TLSTEST", 00:20:47.373 "trtype": "tcp", 00:20:47.373 "traddr": "10.0.0.2", 00:20:47.373 "adrfam": "ipv4", 00:20:47.373 "trsvcid": "4420", 00:20:47.373 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:47.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.373 "prchk_reftag": false, 00:20:47.373 "prchk_guard": false, 00:20:47.373 "hdgst": false, 00:20:47.373 "ddgst": false, 00:20:47.373 "psk": "/tmp/tmp.dda7ceQAAd", 00:20:47.373 "method": "bdev_nvme_attach_controller", 00:20:47.373 "req_id": 1 00:20:47.373 } 00:20:47.373 Got JSON-RPC error response 00:20:47.373 response: 00:20:47.373 { 00:20:47.373 "code": -5, 00:20:47.373 "message": "Input/output error" 00:20:47.373 } 00:20:47.373 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 79690 00:20:47.373 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 79690 ']' 00:20:47.373 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 79690 00:20:47.373 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79690 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79690' 00:20:47.632 killing process with pid 79690 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 79690 00:20:47.632 Received shutdown signal, test time was about 10.000000 seconds 00:20:47.632 00:20:47.632 Latency(us) 00:20:47.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.632 =================================================================================================================== 00:20:47.632 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:47.632 [2024-07-15 20:19:12.765060] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 79690 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=79910 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 79910 /var/tmp/bdevperf.sock 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 79910 ']' 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:47.632 20:19:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.891 [2024-07-15 20:19:12.983970] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:20:47.891 [2024-07-15 20:19:12.984029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79910 ] 00:20:47.891 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.891 [2024-07-15 20:19:13.041390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.891 [2024-07-15 20:19:13.104234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.891 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:47.891 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:47.891 20:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:48.150 [2024-07-15 20:19:13.409466] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:48.150 [2024-07-15 20:19:13.411349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fe030 (9): Bad file descriptor 00:20:48.150 [2024-07-15 20:19:13.412349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:48.150 [2024-07-15 20:19:13.412358] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:48.150 [2024-07-15 20:19:13.412366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:48.150 request: 00:20:48.150 { 00:20:48.150 "name": "TLSTEST", 00:20:48.150 "trtype": "tcp", 00:20:48.150 "traddr": "10.0.0.2", 00:20:48.150 "adrfam": "ipv4", 00:20:48.150 "trsvcid": "4420", 00:20:48.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:48.150 "prchk_reftag": false, 00:20:48.150 "prchk_guard": false, 00:20:48.150 "hdgst": false, 00:20:48.150 "ddgst": false, 00:20:48.150 "method": "bdev_nvme_attach_controller", 00:20:48.150 "req_id": 1 00:20:48.150 } 00:20:48.150 Got JSON-RPC error response 00:20:48.150 response: 00:20:48.150 { 00:20:48.150 "code": -5, 00:20:48.150 "message": "Input/output error" 00:20:48.150 } 00:20:48.150 20:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 79910 00:20:48.150 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 79910 ']' 00:20:48.150 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 79910 00:20:48.150 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:48.150 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:48.150 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79910 00:20:48.150 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:48.150 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:48.150 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79910' 00:20:48.150 killing process with pid 79910 00:20:48.150 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 79910 00:20:48.150 Received shutdown signal, test time was about 10.000000 seconds 00:20:48.150 00:20:48.150 Latency(us) 00:20:48.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.150 =================================================================================================================== 00:20:48.150 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:48.150 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 79910 00:20:48.409 20:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:48.409 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:48.409 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:48.409 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:48.409 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:48.409 20:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 74843 00:20:48.409 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74843 ']' 00:20:48.409 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74843 00:20:48.409 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:48.409 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:48.409 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74843 00:20:48.409 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:48.409 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:48.409 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74843' 00:20:48.409 killing process with pid 
74843 00:20:48.409 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74843 00:20:48.409 [2024-07-15 20:19:13.696833] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:48.409 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74843 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.NTnb15z3Uz 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.NTnb15z3Uz 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=80115 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 80115 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 80115 ']' 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:48.668 20:19:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.668 [2024-07-15 20:19:14.014025] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:20:48.668 [2024-07-15 20:19:14.014083] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.927 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.927 [2024-07-15 20:19:14.089270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.927 [2024-07-15 20:19:14.178284] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.927 [2024-07-15 20:19:14.178326] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.927 [2024-07-15 20:19:14.178337] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.927 [2024-07-15 20:19:14.178346] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.927 [2024-07-15 20:19:14.178354] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:48.927 [2024-07-15 20:19:14.178375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.186 20:19:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:49.186 20:19:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:49.186 20:19:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:49.186 20:19:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:49.186 20:19:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.186 20:19:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.186 20:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.NTnb15z3Uz 00:20:49.186 20:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.NTnb15z3Uz 00:20:49.186 20:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:49.444 [2024-07-15 20:19:14.541621] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.444 20:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:49.703 20:19:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:49.703 [2024-07-15 20:19:15.018895] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:49.703 [2024-07-15 20:19:15.019095] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.703 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:49.962 malloc0 00:20:49.962 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:50.223 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.NTnb15z3Uz 00:20:50.482 [2024-07-15 20:19:15.750246] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:50.482 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NTnb15z3Uz 00:20:50.482 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:50.482 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:50.482 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:50.482 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NTnb15z3Uz' 00:20:50.482 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:50.482 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:50.482 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=80473 00:20:50.482 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:50.482 20:19:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 80473 /var/tmp/bdevperf.sock 00:20:50.482 20:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 80473 ']' 00:20:50.482 20:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.482 20:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.482 20:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:50.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:50.482 20:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.482 20:19:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.482 [2024-07-15 20:19:15.809992] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:20:50.482 [2024-07-15 20:19:15.810049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80473 ] 00:20:50.747 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.747 [2024-07-15 20:19:15.867173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.747 [2024-07-15 20:19:15.935481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.747 20:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:50.747 20:19:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:50.747 20:19:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NTnb15z3Uz 00:20:51.005 [2024-07-15 20:19:16.258276] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:51.005 [2024-07-15 20:19:16.258342] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:51.005 TLSTESTn1 00:20:51.005 20:19:16 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:51.264 Running I/O for 10 seconds... 00:21:01.297 00:21:01.297 Latency(us) 00:21:01.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.297 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:01.297 Verification LBA range: start 0x0 length 0x2000 00:21:01.297 TLSTESTn1 : 10.02 4527.75 17.69 0.00 0.00 28228.05 6255.71 56241.80 00:21:01.297 =================================================================================================================== 00:21:01.297 Total : 4527.75 17.69 0.00 0.00 28228.05 6255.71 56241.80 00:21:01.297 0 00:21:01.297 20:19:26 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:01.297 20:19:26 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 80473 00:21:01.297 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 80473 ']' 00:21:01.297 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 80473 00:21:01.297 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:01.297 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:01.297 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80473 00:21:01.297 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:01.297 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:01.297 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80473' 00:21:01.297 killing process with pid 80473 00:21:01.297 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 80473 00:21:01.297 Received shutdown signal, test time was about 10.000000 seconds 00:21:01.297 00:21:01.297 Latency(us) 00:21:01.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:01.297 =================================================================================================================== 00:21:01.297 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:01.297 [2024-07-15 20:19:26.576908] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:01.297 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 80473 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.NTnb15z3Uz 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NTnb15z3Uz 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NTnb15z3Uz 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NTnb15z3Uz 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NTnb15z3Uz' 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82319 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82319 /var/tmp/bdevperf.sock 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 82319 ']' 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:01.556 20:19:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.556 [2024-07-15 20:19:26.807164] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:21:01.556 [2024-07-15 20:19:26.807228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82319 ] 00:21:01.556 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.556 [2024-07-15 20:19:26.864955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.814 [2024-07-15 20:19:26.928008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.814 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:01.814 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:01.814 20:19:27 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NTnb15z3Uz 00:21:02.072 [2024-07-15 20:19:27.238792] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.072 [2024-07-15 20:19:27.238847] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:02.072 [2024-07-15 20:19:27.238854] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.NTnb15z3Uz 00:21:02.072 request: 00:21:02.072 { 00:21:02.072 "name": "TLSTEST", 00:21:02.072 "trtype": "tcp", 00:21:02.072 "traddr": "10.0.0.2", 00:21:02.072 "adrfam": "ipv4", 00:21:02.072 "trsvcid": "4420", 00:21:02.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:02.072 "prchk_reftag": false, 00:21:02.072 "prchk_guard": false, 00:21:02.072 "hdgst": false, 00:21:02.072 "ddgst": false, 00:21:02.072 "psk": "/tmp/tmp.NTnb15z3Uz", 00:21:02.072 "method": "bdev_nvme_attach_controller", 00:21:02.072 "req_id": 1 00:21:02.072 } 00:21:02.072 Got JSON-RPC error response 00:21:02.072 response: 00:21:02.072 { 00:21:02.072 "code": -1, 00:21:02.072 "message": "Operation not permitted" 00:21:02.072 } 00:21:02.072 20:19:27 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 82319 00:21:02.072 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 82319 ']' 00:21:02.072 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 82319 00:21:02.072 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:02.072 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:02.072 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82319 00:21:02.072 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:02.072 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:02.072 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82319' 00:21:02.072 killing process with pid 82319 00:21:02.072 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 82319 00:21:02.072 Received shutdown signal, test time was about 10.000000 seconds 00:21:02.072 00:21:02.072 Latency(us) 00:21:02.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.072 =================================================================================================================== 
00:21:02.072 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:02.072 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 82319 00:21:02.336 20:19:27 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:02.336 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:02.336 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:02.336 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:02.336 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:02.336 20:19:27 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 80115 00:21:02.336 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 80115 ']' 00:21:02.336 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 80115 00:21:02.336 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:02.336 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:02.336 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80115 00:21:02.336 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:02.336 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:02.336 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80115' 00:21:02.336 killing process with pid 80115 00:21:02.336 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 80115 00:21:02.336 [2024-07-15 20:19:27.524158] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:02.336 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 80115 00:21:02.595 20:19:27 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:02.595 20:19:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:02.595 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:02.595 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.595 20:19:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=82586 00:21:02.595 20:19:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 82586 00:21:02.595 20:19:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:02.595 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 82586 ']' 00:21:02.595 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.595 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.595 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.595 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.595 20:19:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.595 [2024-07-15 20:19:27.796313] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:21:02.595 [2024-07-15 20:19:27.796372] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.595 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.595 [2024-07-15 20:19:27.872766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.852 [2024-07-15 20:19:27.962528] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.852 [2024-07-15 20:19:27.962569] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.852 [2024-07-15 20:19:27.962579] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.852 [2024-07-15 20:19:27.962588] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.853 [2024-07-15 20:19:27.962595] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.853 [2024-07-15 20:19:27.962622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.853 20:19:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:02.853 20:19:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:02.853 20:19:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:02.853 20:19:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:02.853 20:19:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.853 20:19:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.853 20:19:28 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.NTnb15z3Uz 00:21:02.853 20:19:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:02.853 20:19:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.NTnb15z3Uz 00:21:02.853 20:19:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:02.853 20:19:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:02.853 20:19:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:02.853 20:19:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:02.853 20:19:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.NTnb15z3Uz 00:21:02.853 20:19:28 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.NTnb15z3Uz 00:21:02.853 20:19:28 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:03.109 [2024-07-15 20:19:28.249745] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.109 20:19:28 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:03.366 20:19:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:03.623 [2024-07-15 20:19:28.731030] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:21:03.623 [2024-07-15 20:19:28.731230] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.623 20:19:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:03.880 malloc0 00:21:03.880 20:19:28 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:04.138 20:19:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NTnb15z3Uz 00:21:04.138 [2024-07-15 20:19:29.462214] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:04.138 [2024-07-15 20:19:29.462247] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:04.138 [2024-07-15 20:19:29.462286] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:04.138 request: 00:21:04.138 { 00:21:04.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.138 "host": "nqn.2016-06.io.spdk:host1", 00:21:04.138 "psk": "/tmp/tmp.NTnb15z3Uz", 00:21:04.138 "method": "nvmf_subsystem_add_host", 00:21:04.138 "req_id": 1 00:21:04.138 } 00:21:04.138 Got JSON-RPC error response 00:21:04.138 response: 00:21:04.138 { 00:21:04.138 "code": -32603, 00:21:04.138 "message": "Internal error" 00:21:04.138 } 00:21:04.138 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:04.138 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:04.138 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:04.138 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:04.138 20:19:29 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 82586 00:21:04.138 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 82586 ']' 00:21:04.138 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 82586 00:21:04.138 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:04.138 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:04.138 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82586 00:21:04.396 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:04.396 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:04.396 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82586' 00:21:04.396 killing process with pid 82586 00:21:04.396 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 82586 00:21:04.396 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 82586 00:21:04.396 20:19:29 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.NTnb15z3Uz 00:21:04.396 20:19:29 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:04.396 20:19:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:04.396 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:04.396 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.396 20:19:29 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:04.396 20:19:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=82879 00:21:04.396 20:19:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 82879 00:21:04.396 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 82879 ']' 00:21:04.396 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.396 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:04.396 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.396 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:04.654 20:19:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.654 [2024-07-15 20:19:29.786946] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:21:04.654 [2024-07-15 20:19:29.787005] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.655 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.655 [2024-07-15 20:19:29.863770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.655 [2024-07-15 20:19:29.950170] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.655 [2024-07-15 20:19:29.950214] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.655 [2024-07-15 20:19:29.950225] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.655 [2024-07-15 20:19:29.950233] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.655 [2024-07-15 20:19:29.950242] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:04.655 [2024-07-15 20:19:29.950273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.913 20:19:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:04.913 20:19:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:04.913 20:19:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:04.913 20:19:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:04.913 20:19:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.913 20:19:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.913 20:19:30 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.NTnb15z3Uz 00:21:04.913 20:19:30 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.NTnb15z3Uz 00:21:04.913 20:19:30 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:05.172 [2024-07-15 20:19:30.310755] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.172 20:19:30 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:05.431 20:19:30 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:05.691 [2024-07-15 20:19:30.788018] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:05.691 [2024-07-15 20:19:30.788219] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.691 20:19:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:05.954 malloc0 00:21:05.954 20:19:31 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:05.954 20:19:31 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NTnb15z3Uz 00:21:06.213 [2024-07-15 20:19:31.511198] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:06.213 20:19:31 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=83176 00:21:06.213 20:19:31 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:06.213 20:19:31 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:06.213 20:19:31 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 83176 /var/tmp/bdevperf.sock 00:21:06.213 20:19:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83176 ']' 00:21:06.213 20:19:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:06.213 20:19:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:06.213 20:19:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:06.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:06.213 20:19:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:06.213 20:19:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.471 [2024-07-15 20:19:31.576039] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:21:06.471 [2024-07-15 20:19:31.576099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83176 ] 00:21:06.471 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.471 [2024-07-15 20:19:31.634272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.471 [2024-07-15 20:19:31.703698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.471 20:19:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:06.471 20:19:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:06.471 20:19:31 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NTnb15z3Uz 00:21:06.730 [2024-07-15 20:19:32.018881] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:06.730 [2024-07-15 20:19:32.018947] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:06.988 TLSTESTn1 00:21:06.988 20:19:32 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:07.248 20:19:32 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:07.248 "subsystems": [ 00:21:07.248 { 00:21:07.248 "subsystem": "keyring", 00:21:07.248 "config": [] 00:21:07.248 }, 00:21:07.248 { 00:21:07.248 "subsystem": "iobuf", 00:21:07.248 "config": [ 00:21:07.248 { 00:21:07.248 "method": "iobuf_set_options", 00:21:07.248 "params": { 00:21:07.248 "small_pool_count": 8192, 00:21:07.248 "large_pool_count": 1024, 00:21:07.248 "small_bufsize": 8192, 00:21:07.248 "large_bufsize": 135168 00:21:07.248 } 00:21:07.248 } 00:21:07.248 ] 00:21:07.248 }, 00:21:07.248 { 00:21:07.248 "subsystem": "sock", 00:21:07.248 "config": [ 00:21:07.248 { 00:21:07.248 "method": "sock_set_default_impl", 00:21:07.248 "params": { 00:21:07.248 "impl_name": "posix" 00:21:07.248 } 00:21:07.248 }, 00:21:07.248 { 00:21:07.248 "method": "sock_impl_set_options", 00:21:07.248 "params": { 00:21:07.248 "impl_name": "ssl", 00:21:07.248 "recv_buf_size": 4096, 00:21:07.248 "send_buf_size": 4096, 00:21:07.248 "enable_recv_pipe": true, 00:21:07.248 "enable_quickack": false, 00:21:07.248 "enable_placement_id": 0, 00:21:07.248 "enable_zerocopy_send_server": true, 00:21:07.248 "enable_zerocopy_send_client": false, 00:21:07.248 "zerocopy_threshold": 0, 00:21:07.248 "tls_version": 0, 00:21:07.248 "enable_ktls": false 00:21:07.248 } 00:21:07.248 }, 00:21:07.248 { 00:21:07.248 "method": "sock_impl_set_options", 00:21:07.248 "params": { 00:21:07.248 "impl_name": "posix", 00:21:07.248 "recv_buf_size": 2097152, 00:21:07.248 
"send_buf_size": 2097152, 00:21:07.248 "enable_recv_pipe": true, 00:21:07.248 "enable_quickack": false, 00:21:07.248 "enable_placement_id": 0, 00:21:07.248 "enable_zerocopy_send_server": true, 00:21:07.248 "enable_zerocopy_send_client": false, 00:21:07.248 "zerocopy_threshold": 0, 00:21:07.248 "tls_version": 0, 00:21:07.248 "enable_ktls": false 00:21:07.248 } 00:21:07.248 } 00:21:07.248 ] 00:21:07.248 }, 00:21:07.248 { 00:21:07.248 "subsystem": "vmd", 00:21:07.248 "config": [] 00:21:07.248 }, 00:21:07.248 { 00:21:07.248 "subsystem": "accel", 00:21:07.248 "config": [ 00:21:07.248 { 00:21:07.248 "method": "accel_set_options", 00:21:07.248 "params": { 00:21:07.248 "small_cache_size": 128, 00:21:07.248 "large_cache_size": 16, 00:21:07.248 "task_count": 2048, 00:21:07.248 "sequence_count": 2048, 00:21:07.248 "buf_count": 2048 00:21:07.248 } 00:21:07.248 } 00:21:07.248 ] 00:21:07.248 }, 00:21:07.248 { 00:21:07.248 "subsystem": "bdev", 00:21:07.248 "config": [ 00:21:07.248 { 00:21:07.248 "method": "bdev_set_options", 00:21:07.248 "params": { 00:21:07.248 "bdev_io_pool_size": 65535, 00:21:07.248 "bdev_io_cache_size": 256, 00:21:07.248 "bdev_auto_examine": true, 00:21:07.248 "iobuf_small_cache_size": 128, 00:21:07.248 "iobuf_large_cache_size": 16 00:21:07.248 } 00:21:07.248 }, 00:21:07.248 { 00:21:07.248 "method": "bdev_raid_set_options", 00:21:07.248 "params": { 00:21:07.248 "process_window_size_kb": 1024 00:21:07.248 } 00:21:07.248 }, 00:21:07.248 { 00:21:07.248 "method": "bdev_iscsi_set_options", 00:21:07.248 "params": { 00:21:07.248 "timeout_sec": 30 00:21:07.248 } 00:21:07.248 }, 00:21:07.248 { 00:21:07.248 "method": "bdev_nvme_set_options", 00:21:07.248 "params": { 00:21:07.248 "action_on_timeout": "none", 00:21:07.248 "timeout_us": 0, 00:21:07.248 "timeout_admin_us": 0, 00:21:07.248 "keep_alive_timeout_ms": 10000, 00:21:07.248 "arbitration_burst": 0, 00:21:07.248 "low_priority_weight": 0, 00:21:07.248 "medium_priority_weight": 0, 00:21:07.248 "high_priority_weight": 0, 00:21:07.248 "nvme_adminq_poll_period_us": 10000, 00:21:07.248 "nvme_ioq_poll_period_us": 0, 00:21:07.248 "io_queue_requests": 0, 00:21:07.248 "delay_cmd_submit": true, 00:21:07.248 "transport_retry_count": 4, 00:21:07.248 "bdev_retry_count": 3, 00:21:07.248 "transport_ack_timeout": 0, 00:21:07.248 "ctrlr_loss_timeout_sec": 0, 00:21:07.248 "reconnect_delay_sec": 0, 00:21:07.248 "fast_io_fail_timeout_sec": 0, 00:21:07.248 "disable_auto_failback": false, 00:21:07.248 "generate_uuids": false, 00:21:07.248 "transport_tos": 0, 00:21:07.248 "nvme_error_stat": false, 00:21:07.248 "rdma_srq_size": 0, 00:21:07.248 "io_path_stat": false, 00:21:07.248 "allow_accel_sequence": false, 00:21:07.248 "rdma_max_cq_size": 0, 00:21:07.248 "rdma_cm_event_timeout_ms": 0, 00:21:07.248 "dhchap_digests": [ 00:21:07.248 "sha256", 00:21:07.248 "sha384", 00:21:07.248 "sha512" 00:21:07.248 ], 00:21:07.248 "dhchap_dhgroups": [ 00:21:07.248 "null", 00:21:07.248 "ffdhe2048", 00:21:07.248 "ffdhe3072", 00:21:07.248 "ffdhe4096", 00:21:07.248 "ffdhe6144", 00:21:07.248 "ffdhe8192" 00:21:07.248 ] 00:21:07.248 } 00:21:07.249 }, 00:21:07.249 { 00:21:07.249 "method": "bdev_nvme_set_hotplug", 00:21:07.249 "params": { 00:21:07.249 "period_us": 100000, 00:21:07.249 "enable": false 00:21:07.249 } 00:21:07.249 }, 00:21:07.249 { 00:21:07.249 "method": "bdev_malloc_create", 00:21:07.249 "params": { 00:21:07.249 "name": "malloc0", 00:21:07.249 "num_blocks": 8192, 00:21:07.249 "block_size": 4096, 00:21:07.249 "physical_block_size": 4096, 00:21:07.249 "uuid": 
"3ceaa36d-c18a-40f2-92e3-c11ecdeacb68", 00:21:07.249 "optimal_io_boundary": 0 00:21:07.249 } 00:21:07.249 }, 00:21:07.249 { 00:21:07.249 "method": "bdev_wait_for_examine" 00:21:07.249 } 00:21:07.249 ] 00:21:07.249 }, 00:21:07.249 { 00:21:07.249 "subsystem": "nbd", 00:21:07.249 "config": [] 00:21:07.249 }, 00:21:07.249 { 00:21:07.249 "subsystem": "scheduler", 00:21:07.249 "config": [ 00:21:07.249 { 00:21:07.249 "method": "framework_set_scheduler", 00:21:07.249 "params": { 00:21:07.249 "name": "static" 00:21:07.249 } 00:21:07.249 } 00:21:07.249 ] 00:21:07.249 }, 00:21:07.249 { 00:21:07.249 "subsystem": "nvmf", 00:21:07.249 "config": [ 00:21:07.249 { 00:21:07.249 "method": "nvmf_set_config", 00:21:07.249 "params": { 00:21:07.249 "discovery_filter": "match_any", 00:21:07.249 "admin_cmd_passthru": { 00:21:07.249 "identify_ctrlr": false 00:21:07.249 } 00:21:07.249 } 00:21:07.249 }, 00:21:07.249 { 00:21:07.249 "method": "nvmf_set_max_subsystems", 00:21:07.249 "params": { 00:21:07.249 "max_subsystems": 1024 00:21:07.249 } 00:21:07.249 }, 00:21:07.249 { 00:21:07.249 "method": "nvmf_set_crdt", 00:21:07.249 "params": { 00:21:07.249 "crdt1": 0, 00:21:07.249 "crdt2": 0, 00:21:07.249 "crdt3": 0 00:21:07.249 } 00:21:07.249 }, 00:21:07.249 { 00:21:07.249 "method": "nvmf_create_transport", 00:21:07.249 "params": { 00:21:07.249 "trtype": "TCP", 00:21:07.249 "max_queue_depth": 128, 00:21:07.249 "max_io_qpairs_per_ctrlr": 127, 00:21:07.249 "in_capsule_data_size": 4096, 00:21:07.249 "max_io_size": 131072, 00:21:07.249 "io_unit_size": 131072, 00:21:07.249 "max_aq_depth": 128, 00:21:07.249 "num_shared_buffers": 511, 00:21:07.249 "buf_cache_size": 4294967295, 00:21:07.249 "dif_insert_or_strip": false, 00:21:07.249 "zcopy": false, 00:21:07.249 "c2h_success": false, 00:21:07.249 "sock_priority": 0, 00:21:07.249 "abort_timeout_sec": 1, 00:21:07.249 "ack_timeout": 0, 00:21:07.249 "data_wr_pool_size": 0 00:21:07.249 } 00:21:07.249 }, 00:21:07.249 { 00:21:07.249 "method": "nvmf_create_subsystem", 00:21:07.249 "params": { 00:21:07.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.249 "allow_any_host": false, 00:21:07.249 "serial_number": "SPDK00000000000001", 00:21:07.249 "model_number": "SPDK bdev Controller", 00:21:07.249 "max_namespaces": 10, 00:21:07.249 "min_cntlid": 1, 00:21:07.249 "max_cntlid": 65519, 00:21:07.249 "ana_reporting": false 00:21:07.249 } 00:21:07.249 }, 00:21:07.249 { 00:21:07.249 "method": "nvmf_subsystem_add_host", 00:21:07.249 "params": { 00:21:07.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.249 "host": "nqn.2016-06.io.spdk:host1", 00:21:07.249 "psk": "/tmp/tmp.NTnb15z3Uz" 00:21:07.249 } 00:21:07.249 }, 00:21:07.249 { 00:21:07.249 "method": "nvmf_subsystem_add_ns", 00:21:07.249 "params": { 00:21:07.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.249 "namespace": { 00:21:07.249 "nsid": 1, 00:21:07.249 "bdev_name": "malloc0", 00:21:07.249 "nguid": "3CEAA36DC18A40F292E3C11ECDEACB68", 00:21:07.249 "uuid": "3ceaa36d-c18a-40f2-92e3-c11ecdeacb68", 00:21:07.249 "no_auto_visible": false 00:21:07.249 } 00:21:07.249 } 00:21:07.249 }, 00:21:07.249 { 00:21:07.249 "method": "nvmf_subsystem_add_listener", 00:21:07.249 "params": { 00:21:07.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.249 "listen_address": { 00:21:07.249 "trtype": "TCP", 00:21:07.249 "adrfam": "IPv4", 00:21:07.249 "traddr": "10.0.0.2", 00:21:07.249 "trsvcid": "4420" 00:21:07.249 }, 00:21:07.249 "secure_channel": true 00:21:07.249 } 00:21:07.249 } 00:21:07.249 ] 00:21:07.249 } 00:21:07.249 ] 00:21:07.249 }' 00:21:07.249 20:19:32 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:07.508 20:19:32 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:07.508 "subsystems": [ 00:21:07.508 { 00:21:07.508 "subsystem": "keyring", 00:21:07.508 "config": [] 00:21:07.508 }, 00:21:07.508 { 00:21:07.508 "subsystem": "iobuf", 00:21:07.508 "config": [ 00:21:07.508 { 00:21:07.508 "method": "iobuf_set_options", 00:21:07.508 "params": { 00:21:07.508 "small_pool_count": 8192, 00:21:07.508 "large_pool_count": 1024, 00:21:07.508 "small_bufsize": 8192, 00:21:07.508 "large_bufsize": 135168 00:21:07.508 } 00:21:07.508 } 00:21:07.508 ] 00:21:07.508 }, 00:21:07.508 { 00:21:07.508 "subsystem": "sock", 00:21:07.508 "config": [ 00:21:07.508 { 00:21:07.508 "method": "sock_set_default_impl", 00:21:07.508 "params": { 00:21:07.508 "impl_name": "posix" 00:21:07.508 } 00:21:07.508 }, 00:21:07.508 { 00:21:07.508 "method": "sock_impl_set_options", 00:21:07.508 "params": { 00:21:07.508 "impl_name": "ssl", 00:21:07.508 "recv_buf_size": 4096, 00:21:07.508 "send_buf_size": 4096, 00:21:07.508 "enable_recv_pipe": true, 00:21:07.508 "enable_quickack": false, 00:21:07.508 "enable_placement_id": 0, 00:21:07.508 "enable_zerocopy_send_server": true, 00:21:07.508 "enable_zerocopy_send_client": false, 00:21:07.508 "zerocopy_threshold": 0, 00:21:07.508 "tls_version": 0, 00:21:07.508 "enable_ktls": false 00:21:07.508 } 00:21:07.508 }, 00:21:07.508 { 00:21:07.508 "method": "sock_impl_set_options", 00:21:07.508 "params": { 00:21:07.508 "impl_name": "posix", 00:21:07.508 "recv_buf_size": 2097152, 00:21:07.508 "send_buf_size": 2097152, 00:21:07.508 "enable_recv_pipe": true, 00:21:07.508 "enable_quickack": false, 00:21:07.508 "enable_placement_id": 0, 00:21:07.508 "enable_zerocopy_send_server": true, 00:21:07.508 "enable_zerocopy_send_client": false, 00:21:07.508 "zerocopy_threshold": 0, 00:21:07.508 "tls_version": 0, 00:21:07.508 "enable_ktls": false 00:21:07.508 } 00:21:07.508 } 00:21:07.508 ] 00:21:07.508 }, 00:21:07.508 { 00:21:07.508 "subsystem": "vmd", 00:21:07.508 "config": [] 00:21:07.508 }, 00:21:07.508 { 00:21:07.508 "subsystem": "accel", 00:21:07.508 "config": [ 00:21:07.508 { 00:21:07.508 "method": "accel_set_options", 00:21:07.508 "params": { 00:21:07.508 "small_cache_size": 128, 00:21:07.508 "large_cache_size": 16, 00:21:07.508 "task_count": 2048, 00:21:07.508 "sequence_count": 2048, 00:21:07.508 "buf_count": 2048 00:21:07.508 } 00:21:07.508 } 00:21:07.508 ] 00:21:07.508 }, 00:21:07.508 { 00:21:07.508 "subsystem": "bdev", 00:21:07.508 "config": [ 00:21:07.508 { 00:21:07.508 "method": "bdev_set_options", 00:21:07.508 "params": { 00:21:07.508 "bdev_io_pool_size": 65535, 00:21:07.508 "bdev_io_cache_size": 256, 00:21:07.508 "bdev_auto_examine": true, 00:21:07.508 "iobuf_small_cache_size": 128, 00:21:07.508 "iobuf_large_cache_size": 16 00:21:07.508 } 00:21:07.508 }, 00:21:07.508 { 00:21:07.508 "method": "bdev_raid_set_options", 00:21:07.508 "params": { 00:21:07.508 "process_window_size_kb": 1024 00:21:07.508 } 00:21:07.508 }, 00:21:07.508 { 00:21:07.508 "method": "bdev_iscsi_set_options", 00:21:07.508 "params": { 00:21:07.508 "timeout_sec": 30 00:21:07.508 } 00:21:07.508 }, 00:21:07.508 { 00:21:07.508 "method": "bdev_nvme_set_options", 00:21:07.508 "params": { 00:21:07.508 "action_on_timeout": "none", 00:21:07.508 "timeout_us": 0, 00:21:07.508 "timeout_admin_us": 0, 00:21:07.508 "keep_alive_timeout_ms": 10000, 00:21:07.508 "arbitration_burst": 0, 
00:21:07.508 "low_priority_weight": 0, 00:21:07.508 "medium_priority_weight": 0, 00:21:07.508 "high_priority_weight": 0, 00:21:07.508 "nvme_adminq_poll_period_us": 10000, 00:21:07.508 "nvme_ioq_poll_period_us": 0, 00:21:07.508 "io_queue_requests": 512, 00:21:07.508 "delay_cmd_submit": true, 00:21:07.509 "transport_retry_count": 4, 00:21:07.509 "bdev_retry_count": 3, 00:21:07.509 "transport_ack_timeout": 0, 00:21:07.509 "ctrlr_loss_timeout_sec": 0, 00:21:07.509 "reconnect_delay_sec": 0, 00:21:07.509 "fast_io_fail_timeout_sec": 0, 00:21:07.509 "disable_auto_failback": false, 00:21:07.509 "generate_uuids": false, 00:21:07.509 "transport_tos": 0, 00:21:07.509 "nvme_error_stat": false, 00:21:07.509 "rdma_srq_size": 0, 00:21:07.509 "io_path_stat": false, 00:21:07.509 "allow_accel_sequence": false, 00:21:07.509 "rdma_max_cq_size": 0, 00:21:07.509 "rdma_cm_event_timeout_ms": 0, 00:21:07.509 "dhchap_digests": [ 00:21:07.509 "sha256", 00:21:07.509 "sha384", 00:21:07.509 "sha512" 00:21:07.509 ], 00:21:07.509 "dhchap_dhgroups": [ 00:21:07.509 "null", 00:21:07.509 "ffdhe2048", 00:21:07.509 "ffdhe3072", 00:21:07.509 "ffdhe4096", 00:21:07.509 "ffdhe6144", 00:21:07.509 "ffdhe8192" 00:21:07.509 ] 00:21:07.509 } 00:21:07.509 }, 00:21:07.509 { 00:21:07.509 "method": "bdev_nvme_attach_controller", 00:21:07.509 "params": { 00:21:07.509 "name": "TLSTEST", 00:21:07.509 "trtype": "TCP", 00:21:07.509 "adrfam": "IPv4", 00:21:07.509 "traddr": "10.0.0.2", 00:21:07.509 "trsvcid": "4420", 00:21:07.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.509 "prchk_reftag": false, 00:21:07.509 "prchk_guard": false, 00:21:07.509 "ctrlr_loss_timeout_sec": 0, 00:21:07.509 "reconnect_delay_sec": 0, 00:21:07.509 "fast_io_fail_timeout_sec": 0, 00:21:07.509 "psk": "/tmp/tmp.NTnb15z3Uz", 00:21:07.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.509 "hdgst": false, 00:21:07.509 "ddgst": false 00:21:07.509 } 00:21:07.509 }, 00:21:07.509 { 00:21:07.509 "method": "bdev_nvme_set_hotplug", 00:21:07.509 "params": { 00:21:07.509 "period_us": 100000, 00:21:07.509 "enable": false 00:21:07.509 } 00:21:07.509 }, 00:21:07.509 { 00:21:07.509 "method": "bdev_wait_for_examine" 00:21:07.509 } 00:21:07.509 ] 00:21:07.509 }, 00:21:07.509 { 00:21:07.509 "subsystem": "nbd", 00:21:07.509 "config": [] 00:21:07.509 } 00:21:07.509 ] 00:21:07.509 }' 00:21:07.509 20:19:32 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 83176 00:21:07.509 20:19:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83176 ']' 00:21:07.509 20:19:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83176 00:21:07.509 20:19:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:07.509 20:19:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:07.509 20:19:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83176 00:21:07.509 20:19:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:07.509 20:19:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:07.509 20:19:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83176' 00:21:07.509 killing process with pid 83176 00:21:07.509 20:19:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83176 00:21:07.509 Received shutdown signal, test time was about 10.000000 seconds 00:21:07.509 00:21:07.509 Latency(us) 00:21:07.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:07.509 =================================================================================================================== 00:21:07.509 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:07.509 [2024-07-15 20:19:32.786742] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:07.509 20:19:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83176 00:21:07.768 20:19:32 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 82879 00:21:07.768 20:19:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 82879 ']' 00:21:07.768 20:19:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 82879 00:21:07.768 20:19:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:07.768 20:19:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:07.768 20:19:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82879 00:21:07.768 20:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:07.768 20:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:07.768 20:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82879' 00:21:07.768 killing process with pid 82879 00:21:07.768 20:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 82879 00:21:07.768 [2024-07-15 20:19:33.009050] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:07.768 20:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 82879 00:21:08.028 20:19:33 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:08.028 20:19:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:08.028 20:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:08.028 20:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.028 20:19:33 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:08.028 "subsystems": [ 00:21:08.028 { 00:21:08.028 "subsystem": "keyring", 00:21:08.028 "config": [] 00:21:08.028 }, 00:21:08.028 { 00:21:08.028 "subsystem": "iobuf", 00:21:08.028 "config": [ 00:21:08.028 { 00:21:08.028 "method": "iobuf_set_options", 00:21:08.028 "params": { 00:21:08.028 "small_pool_count": 8192, 00:21:08.028 "large_pool_count": 1024, 00:21:08.028 "small_bufsize": 8192, 00:21:08.028 "large_bufsize": 135168 00:21:08.028 } 00:21:08.028 } 00:21:08.028 ] 00:21:08.028 }, 00:21:08.028 { 00:21:08.028 "subsystem": "sock", 00:21:08.028 "config": [ 00:21:08.028 { 00:21:08.028 "method": "sock_set_default_impl", 00:21:08.028 "params": { 00:21:08.028 "impl_name": "posix" 00:21:08.028 } 00:21:08.028 }, 00:21:08.028 { 00:21:08.028 "method": "sock_impl_set_options", 00:21:08.028 "params": { 00:21:08.028 "impl_name": "ssl", 00:21:08.028 "recv_buf_size": 4096, 00:21:08.028 "send_buf_size": 4096, 00:21:08.028 "enable_recv_pipe": true, 00:21:08.028 "enable_quickack": false, 00:21:08.028 "enable_placement_id": 0, 00:21:08.028 "enable_zerocopy_send_server": true, 00:21:08.028 "enable_zerocopy_send_client": false, 00:21:08.028 "zerocopy_threshold": 0, 00:21:08.028 "tls_version": 0, 00:21:08.028 "enable_ktls": false 00:21:08.028 } 00:21:08.028 }, 00:21:08.028 { 00:21:08.028 "method": "sock_impl_set_options", 00:21:08.028 "params": 
{ 00:21:08.028 "impl_name": "posix", 00:21:08.028 "recv_buf_size": 2097152, 00:21:08.028 "send_buf_size": 2097152, 00:21:08.028 "enable_recv_pipe": true, 00:21:08.028 "enable_quickack": false, 00:21:08.028 "enable_placement_id": 0, 00:21:08.028 "enable_zerocopy_send_server": true, 00:21:08.028 "enable_zerocopy_send_client": false, 00:21:08.028 "zerocopy_threshold": 0, 00:21:08.028 "tls_version": 0, 00:21:08.028 "enable_ktls": false 00:21:08.028 } 00:21:08.028 } 00:21:08.028 ] 00:21:08.028 }, 00:21:08.028 { 00:21:08.028 "subsystem": "vmd", 00:21:08.028 "config": [] 00:21:08.028 }, 00:21:08.028 { 00:21:08.028 "subsystem": "accel", 00:21:08.028 "config": [ 00:21:08.028 { 00:21:08.028 "method": "accel_set_options", 00:21:08.028 "params": { 00:21:08.028 "small_cache_size": 128, 00:21:08.028 "large_cache_size": 16, 00:21:08.028 "task_count": 2048, 00:21:08.028 "sequence_count": 2048, 00:21:08.028 "buf_count": 2048 00:21:08.028 } 00:21:08.028 } 00:21:08.028 ] 00:21:08.028 }, 00:21:08.028 { 00:21:08.028 "subsystem": "bdev", 00:21:08.028 "config": [ 00:21:08.028 { 00:21:08.028 "method": "bdev_set_options", 00:21:08.028 "params": { 00:21:08.028 "bdev_io_pool_size": 65535, 00:21:08.028 "bdev_io_cache_size": 256, 00:21:08.028 "bdev_auto_examine": true, 00:21:08.028 "iobuf_small_cache_size": 128, 00:21:08.028 "iobuf_large_cache_size": 16 00:21:08.028 } 00:21:08.028 }, 00:21:08.028 { 00:21:08.028 "method": "bdev_raid_set_options", 00:21:08.028 "params": { 00:21:08.028 "process_window_size_kb": 1024 00:21:08.028 } 00:21:08.028 }, 00:21:08.028 { 00:21:08.028 "method": "bdev_iscsi_set_options", 00:21:08.028 "params": { 00:21:08.028 "timeout_sec": 30 00:21:08.028 } 00:21:08.028 }, 00:21:08.028 { 00:21:08.028 "method": "bdev_nvme_set_options", 00:21:08.028 "params": { 00:21:08.028 "action_on_timeout": "none", 00:21:08.028 "timeout_us": 0, 00:21:08.028 "timeout_admin_us": 0, 00:21:08.028 "keep_alive_timeout_ms": 10000, 00:21:08.028 "arbitration_burst": 0, 00:21:08.028 "low_priority_weight": 0, 00:21:08.028 "medium_priority_weight": 0, 00:21:08.028 "high_priority_weight": 0, 00:21:08.028 "nvme_adminq_poll_period_us": 10000, 00:21:08.028 "nvme_ioq_poll_period_us": 0, 00:21:08.028 "io_queue_requests": 0, 00:21:08.028 "delay_cmd_submit": true, 00:21:08.028 "transport_retry_count": 4, 00:21:08.028 "bdev_retry_count": 3, 00:21:08.028 "transport_ack_timeout": 0, 00:21:08.028 "ctrlr_loss_timeout_sec": 0, 00:21:08.028 "reconnect_delay_sec": 0, 00:21:08.029 "fast_io_fail_timeout_sec": 0, 00:21:08.029 "disable_auto_failback": false, 00:21:08.029 "generate_uuids": false, 00:21:08.029 "transport_tos": 0, 00:21:08.029 "nvme_error_stat": false, 00:21:08.029 "rdma_srq_size": 0, 00:21:08.029 "io_path_stat": false, 00:21:08.029 "allow_accel_sequence": false, 00:21:08.029 "rdma_max_cq_size": 0, 00:21:08.029 "rdma_cm_event_timeout_ms": 0, 00:21:08.029 "dhchap_digests": [ 00:21:08.029 "sha256", 00:21:08.029 "sha384", 00:21:08.029 "sha512" 00:21:08.029 ], 00:21:08.029 "dhchap_dhgroups": [ 00:21:08.029 "null", 00:21:08.029 "ffdhe2048", 00:21:08.029 "ffdhe3072", 00:21:08.029 "ffdhe4096", 00:21:08.029 "ffdhe6144", 00:21:08.029 "ffdhe8192" 00:21:08.029 ] 00:21:08.029 } 00:21:08.029 }, 00:21:08.029 { 00:21:08.029 "method": "bdev_nvme_set_hotplug", 00:21:08.029 "params": { 00:21:08.029 "period_us": 100000, 00:21:08.029 "enable": false 00:21:08.029 } 00:21:08.029 }, 00:21:08.029 { 00:21:08.029 "method": "bdev_malloc_create", 00:21:08.029 "params": { 00:21:08.029 "name": "malloc0", 00:21:08.029 "num_blocks": 8192, 00:21:08.029 
"block_size": 4096, 00:21:08.029 "physical_block_size": 4096, 00:21:08.029 "uuid": "3ceaa36d-c18a-40f2-92e3-c11ecdeacb68", 00:21:08.029 "optimal_io_boundary": 0 00:21:08.029 } 00:21:08.029 }, 00:21:08.029 { 00:21:08.029 "method": "bdev_wait_for_examine" 00:21:08.029 } 00:21:08.029 ] 00:21:08.029 }, 00:21:08.029 { 00:21:08.029 "subsystem": "nbd", 00:21:08.029 "config": [] 00:21:08.029 }, 00:21:08.029 { 00:21:08.029 "subsystem": "scheduler", 00:21:08.029 "config": [ 00:21:08.029 { 00:21:08.029 "method": "framework_set_scheduler", 00:21:08.029 "params": { 00:21:08.029 "name": "static" 00:21:08.029 } 00:21:08.029 } 00:21:08.029 ] 00:21:08.029 }, 00:21:08.029 { 00:21:08.029 "subsystem": "nvmf", 00:21:08.029 "config": [ 00:21:08.029 { 00:21:08.029 "method": "nvmf_set_config", 00:21:08.029 "params": { 00:21:08.029 "discovery_filter": "match_any", 00:21:08.029 "admin_cmd_passthru": { 00:21:08.029 "identify_ctrlr": false 00:21:08.029 } 00:21:08.029 } 00:21:08.029 }, 00:21:08.029 { 00:21:08.029 "method": "nvmf_set_max_subsystems", 00:21:08.029 "params": { 00:21:08.029 "max_subsystems": 1024 00:21:08.029 } 00:21:08.029 }, 00:21:08.029 { 00:21:08.029 "method": "nvmf_set_crdt", 00:21:08.029 "params": { 00:21:08.029 "crdt1": 0, 00:21:08.029 "crdt2": 0, 00:21:08.029 "crdt3": 0 00:21:08.029 } 00:21:08.029 }, 00:21:08.029 { 00:21:08.029 "method": "nvmf_create_transport", 00:21:08.029 "params": { 00:21:08.029 "trtype": "TCP", 00:21:08.029 "max_queue_depth": 128, 00:21:08.029 "max_io_qpairs_per_ctrlr": 127, 00:21:08.029 "in_capsule_data_size": 4096, 00:21:08.029 "max_io_size": 131072, 00:21:08.029 "io_unit_size": 131072, 00:21:08.029 "max_aq_depth": 128, 00:21:08.029 "num_shared_buffers": 511, 00:21:08.029 "buf_cache_size": 4294967295, 00:21:08.029 "dif_insert_or_strip": false, 00:21:08.029 "zcopy": false, 00:21:08.029 "c2h_success": false, 00:21:08.029 "sock_priority": 0, 00:21:08.029 "abort_timeout_sec": 1, 00:21:08.029 "ack_timeout": 0, 00:21:08.029 "data_wr_pool_size": 0 00:21:08.029 } 00:21:08.029 }, 00:21:08.029 { 00:21:08.029 "method": "nvmf_create_subsystem", 00:21:08.029 "params": { 00:21:08.029 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.029 "allow_any_host": false, 00:21:08.029 "serial_number": "SPDK00000000000001", 00:21:08.029 "model_number": "SPDK bdev Controller", 00:21:08.029 "max_namespaces": 10, 00:21:08.029 "min_cntlid": 1, 00:21:08.029 "max_cntlid": 65519, 00:21:08.029 "ana_reporting": false 00:21:08.029 } 00:21:08.029 }, 00:21:08.029 { 00:21:08.029 "method": "nvmf_subsystem_add_host", 00:21:08.029 "params": { 00:21:08.029 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.029 "host": "nqn.2016-06.io.spdk:host1", 00:21:08.029 "psk": "/tmp/tmp.NTnb15z3Uz" 00:21:08.029 } 00:21:08.029 }, 00:21:08.029 { 00:21:08.029 "method": "nvmf_subsystem_add_ns", 00:21:08.029 "params": { 00:21:08.029 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.029 "namespace": { 00:21:08.029 "nsid": 1, 00:21:08.029 "bdev_name": "malloc0", 00:21:08.029 "nguid": "3CEAA36DC18A40F292E3C11ECDEACB68", 00:21:08.029 "uuid": "3ceaa36d-c18a-40f2-92e3-c11ecdeacb68", 00:21:08.029 "no_auto_visible": false 00:21:08.029 } 00:21:08.029 } 00:21:08.029 }, 00:21:08.029 { 00:21:08.029 "method": "nvmf_subsystem_add_listener", 00:21:08.029 "params": { 00:21:08.029 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.029 "listen_address": { 00:21:08.029 "trtype": "TCP", 00:21:08.029 "adrfam": "IPv4", 00:21:08.029 "traddr": "10.0.0.2", 00:21:08.029 "trsvcid": "4420" 00:21:08.029 }, 00:21:08.029 "secure_channel": true 00:21:08.029 } 00:21:08.029 } 
00:21:08.029 ] 00:21:08.029 } 00:21:08.029 ] 00:21:08.029 }' 00:21:08.029 20:19:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83571 00:21:08.029 20:19:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83571 00:21:08.029 20:19:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:08.029 20:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83571 ']' 00:21:08.029 20:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.029 20:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:08.029 20:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.029 20:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:08.029 20:19:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.029 [2024-07-15 20:19:33.285170] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:21:08.029 [2024-07-15 20:19:33.285229] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.029 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.029 [2024-07-15 20:19:33.361164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.288 [2024-07-15 20:19:33.450797] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.288 [2024-07-15 20:19:33.450840] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.288 [2024-07-15 20:19:33.450851] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.288 [2024-07-15 20:19:33.450860] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.288 [2024-07-15 20:19:33.450868] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:08.288 [2024-07-15 20:19:33.450928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.547 [2024-07-15 20:19:33.660520] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.547 [2024-07-15 20:19:33.676445] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:08.547 [2024-07-15 20:19:33.692507] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:08.547 [2024-07-15 20:19:33.703514] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.116 20:19:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:09.116 20:19:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:09.116 20:19:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:09.116 20:19:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:09.116 20:19:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.116 20:19:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.116 20:19:34 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=83734 00:21:09.116 20:19:34 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 83734 /var/tmp/bdevperf.sock 00:21:09.116 20:19:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83734 ']' 00:21:09.116 20:19:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:09.116 20:19:34 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:09.116 20:19:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:09.116 20:19:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:09.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
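The save_config/relaunch pattern exercised here reduces to a short shell sequence. A minimal sketch, assembled from the commands logged in this trace (full /var/jenkins/... paths shortened to repo-relative ones, and the "ip netns exec cvl_0_0_ns_spdk" wrapper used on this rig omitted); the use of bash process substitution to map the echoed JSON onto /dev/fd/62 and /dev/fd/63 is an assumption consistent with the arguments seen above:

# 1. Capture the live configuration of the target and of bdevperf over their RPC sockets.
tgtconf=$(scripts/rpc.py save_config)
bdevperfconf=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)

# 2. Relaunch the target from the captured JSON; the PSK-protected subsystem,
#    TLS listener and malloc0 namespace come back exactly as saved.
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &

# 3. Relaunch bdevperf in "wait for RPC" mode (-z) from its own saved config,
#    which already contains the bdev_nvme_attach_controller call carrying the PSK.
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &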
00:21:09.116 20:19:34 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:09.116 "subsystems": [ 00:21:09.116 { 00:21:09.116 "subsystem": "keyring", 00:21:09.116 "config": [] 00:21:09.116 }, 00:21:09.116 { 00:21:09.116 "subsystem": "iobuf", 00:21:09.116 "config": [ 00:21:09.116 { 00:21:09.116 "method": "iobuf_set_options", 00:21:09.116 "params": { 00:21:09.116 "small_pool_count": 8192, 00:21:09.116 "large_pool_count": 1024, 00:21:09.116 "small_bufsize": 8192, 00:21:09.116 "large_bufsize": 135168 00:21:09.116 } 00:21:09.116 } 00:21:09.116 ] 00:21:09.116 }, 00:21:09.116 { 00:21:09.116 "subsystem": "sock", 00:21:09.116 "config": [ 00:21:09.116 { 00:21:09.116 "method": "sock_set_default_impl", 00:21:09.116 "params": { 00:21:09.116 "impl_name": "posix" 00:21:09.116 } 00:21:09.116 }, 00:21:09.116 { 00:21:09.116 "method": "sock_impl_set_options", 00:21:09.116 "params": { 00:21:09.116 "impl_name": "ssl", 00:21:09.116 "recv_buf_size": 4096, 00:21:09.116 "send_buf_size": 4096, 00:21:09.116 "enable_recv_pipe": true, 00:21:09.116 "enable_quickack": false, 00:21:09.116 "enable_placement_id": 0, 00:21:09.116 "enable_zerocopy_send_server": true, 00:21:09.116 "enable_zerocopy_send_client": false, 00:21:09.116 "zerocopy_threshold": 0, 00:21:09.116 "tls_version": 0, 00:21:09.116 "enable_ktls": false 00:21:09.116 } 00:21:09.116 }, 00:21:09.116 { 00:21:09.116 "method": "sock_impl_set_options", 00:21:09.116 "params": { 00:21:09.116 "impl_name": "posix", 00:21:09.116 "recv_buf_size": 2097152, 00:21:09.116 "send_buf_size": 2097152, 00:21:09.116 "enable_recv_pipe": true, 00:21:09.116 "enable_quickack": false, 00:21:09.116 "enable_placement_id": 0, 00:21:09.116 "enable_zerocopy_send_server": true, 00:21:09.116 "enable_zerocopy_send_client": false, 00:21:09.116 "zerocopy_threshold": 0, 00:21:09.116 "tls_version": 0, 00:21:09.116 "enable_ktls": false 00:21:09.116 } 00:21:09.116 } 00:21:09.116 ] 00:21:09.116 }, 00:21:09.116 { 00:21:09.116 "subsystem": "vmd", 00:21:09.116 "config": [] 00:21:09.116 }, 00:21:09.116 { 00:21:09.116 "subsystem": "accel", 00:21:09.116 "config": [ 00:21:09.116 { 00:21:09.116 "method": "accel_set_options", 00:21:09.116 "params": { 00:21:09.116 "small_cache_size": 128, 00:21:09.116 "large_cache_size": 16, 00:21:09.116 "task_count": 2048, 00:21:09.116 "sequence_count": 2048, 00:21:09.116 "buf_count": 2048 00:21:09.116 } 00:21:09.116 } 00:21:09.116 ] 00:21:09.116 }, 00:21:09.116 { 00:21:09.116 "subsystem": "bdev", 00:21:09.116 "config": [ 00:21:09.116 { 00:21:09.116 "method": "bdev_set_options", 00:21:09.116 "params": { 00:21:09.116 "bdev_io_pool_size": 65535, 00:21:09.116 "bdev_io_cache_size": 256, 00:21:09.116 "bdev_auto_examine": true, 00:21:09.116 "iobuf_small_cache_size": 128, 00:21:09.116 "iobuf_large_cache_size": 16 00:21:09.116 } 00:21:09.116 }, 00:21:09.116 { 00:21:09.116 "method": "bdev_raid_set_options", 00:21:09.116 "params": { 00:21:09.116 "process_window_size_kb": 1024 00:21:09.116 } 00:21:09.116 }, 00:21:09.116 { 00:21:09.116 "method": "bdev_iscsi_set_options", 00:21:09.116 "params": { 00:21:09.116 "timeout_sec": 30 00:21:09.116 } 00:21:09.116 }, 00:21:09.116 { 00:21:09.116 "method": "bdev_nvme_set_options", 00:21:09.116 "params": { 00:21:09.116 "action_on_timeout": "none", 00:21:09.116 "timeout_us": 0, 00:21:09.117 "timeout_admin_us": 0, 00:21:09.117 "keep_alive_timeout_ms": 10000, 00:21:09.117 "arbitration_burst": 0, 00:21:09.117 "low_priority_weight": 0, 00:21:09.117 "medium_priority_weight": 0, 00:21:09.117 "high_priority_weight": 0, 00:21:09.117 
"nvme_adminq_poll_period_us": 10000, 00:21:09.117 "nvme_ioq_poll_period_us": 0, 00:21:09.117 "io_queue_requests": 512, 00:21:09.117 "delay_cmd_submit": true, 00:21:09.117 "transport_retry_count": 4, 00:21:09.117 "bdev_retry_count": 3, 00:21:09.117 "transport_ack_timeout": 0, 00:21:09.117 "ctrlr_loss_timeout_sec": 0, 00:21:09.117 "reconnect_delay_sec": 0, 00:21:09.117 "fast_io_fail_timeout_sec": 0, 00:21:09.117 "disable_auto_failback": false, 00:21:09.117 "generate_uuids": false, 00:21:09.117 "transport_tos": 0, 00:21:09.117 "nvme_error_stat": false, 00:21:09.117 "rdma_srq_size": 0, 00:21:09.117 "io_path_stat": false, 00:21:09.117 "allow_accel_sequence": false, 00:21:09.117 "rdma_max_cq_size": 0, 00:21:09.117 "rdma_cm_event_timeout_ms": 0, 00:21:09.117 "dhchap_digests": [ 00:21:09.117 "sha256", 00:21:09.117 "sha384", 00:21:09.117 "sha512" 00:21:09.117 ], 00:21:09.117 "dhchap_dhgroups": [ 00:21:09.117 "null", 00:21:09.117 "ffdhe2048", 00:21:09.117 "ffdhe3072", 00:21:09.117 "ffdhe4096", 00:21:09.117 "ffdhe6144", 00:21:09.117 "ffdhe8192" 00:21:09.117 ] 00:21:09.117 } 00:21:09.117 }, 00:21:09.117 { 00:21:09.117 "method": "bdev_nvme_attach_controller", 00:21:09.117 "params": { 00:21:09.117 "name": "TLSTEST", 00:21:09.117 "trtype": "TCP", 00:21:09.117 "adrfam": "IPv4", 00:21:09.117 "traddr": "10.0.0.2", 00:21:09.117 "trsvcid": "4420", 00:21:09.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.117 "prchk_reftag": false, 00:21:09.117 "prchk_guard": false, 00:21:09.117 "ctrlr_loss_timeout_sec": 0, 00:21:09.117 "reconnect_delay_sec": 0, 00:21:09.117 "fast_io_fail_timeout_sec": 0, 00:21:09.117 "psk": "/tmp/tmp.NTnb15z3Uz", 00:21:09.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:09.117 "hdgst": false, 00:21:09.117 "ddgst": false 00:21:09.117 } 00:21:09.117 }, 00:21:09.117 { 00:21:09.117 "method": "bdev_nvme_set_hotplug", 00:21:09.117 "params": { 00:21:09.117 "period_us": 100000, 00:21:09.117 "enable": false 00:21:09.117 } 00:21:09.117 }, 00:21:09.117 { 00:21:09.117 "method": "bdev_wait_for_examine" 00:21:09.117 } 00:21:09.117 ] 00:21:09.117 }, 00:21:09.117 { 00:21:09.117 "subsystem": "nbd", 00:21:09.117 "config": [] 00:21:09.117 } 00:21:09.117 ] 00:21:09.117 }' 00:21:09.117 20:19:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:09.117 20:19:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.117 [2024-07-15 20:19:34.307016] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:21:09.117 [2024-07-15 20:19:34.307075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83734 ] 00:21:09.117 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.117 [2024-07-15 20:19:34.364864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.117 [2024-07-15 20:19:34.433794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:09.376 [2024-07-15 20:19:34.574873] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:09.376 [2024-07-15 20:19:34.574950] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:09.942 20:19:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:09.942 20:19:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:09.942 20:19:35 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:10.199 Running I/O for 10 seconds... 00:21:20.169 00:21:20.169 Latency(us) 00:21:20.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.170 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:20.170 Verification LBA range: start 0x0 length 0x2000 00:21:20.170 TLSTESTn1 : 10.02 5154.07 20.13 0.00 0.00 24794.44 6285.50 48854.11 00:21:20.170 =================================================================================================================== 00:21:20.170 Total : 5154.07 20.13 0.00 0.00 24794.44 6285.50 48854.11 00:21:20.170 0 00:21:20.170 20:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:20.170 20:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 83734 00:21:20.170 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83734 ']' 00:21:20.170 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83734 00:21:20.170 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:20.170 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.170 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83734 00:21:20.170 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:20.170 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:20.170 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83734' 00:21:20.170 killing process with pid 83734 00:21:20.170 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83734 00:21:20.170 Received shutdown signal, test time was about 10.000000 seconds 00:21:20.170 00:21:20.170 Latency(us) 00:21:20.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.170 =================================================================================================================== 00:21:20.170 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.170 [2024-07-15 20:19:45.473527] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
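Because bdevperf was started with -z, it only begins I/O once told to over its RPC socket; that is what the helper script invoked above does. A minimal sketch of that hand-off (the 20-second timeout and paths are taken from the logged command; the polling loop stands in for the test's own waitforlisten helper and is only an illustration):

# Poll until bdevperf has created its RPC socket.
while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done

# Kick off the verify workload configured at start-up and wait up to 20 s
# for the run to finish and print its latency table.
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests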
for removal in v24.09 hit 1 times 00:21:20.170 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83734 00:21:20.428 20:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 83571 00:21:20.428 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83571 ']' 00:21:20.428 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83571 00:21:20.428 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:20.428 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.428 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83571 00:21:20.428 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:20.428 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:20.428 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83571' 00:21:20.428 killing process with pid 83571 00:21:20.428 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83571 00:21:20.428 [2024-07-15 20:19:45.697547] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:20.428 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83571 00:21:20.687 20:19:45 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:20.687 20:19:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:20.688 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:20.688 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.688 20:19:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85839 00:21:20.688 20:19:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85839 00:21:20.688 20:19:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:20.688 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85839 ']' 00:21:20.688 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.688 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:20.688 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.688 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:20.688 20:19:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.688 [2024-07-15 20:19:45.973769] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:21:20.688 [2024-07-15 20:19:45.973826] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.688 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.946 [2024-07-15 20:19:46.061091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.946 [2024-07-15 20:19:46.146580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:20.946 [2024-07-15 20:19:46.146624] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.946 [2024-07-15 20:19:46.146635] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.946 [2024-07-15 20:19:46.146643] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.946 [2024-07-15 20:19:46.146650] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:20.946 [2024-07-15 20:19:46.146678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.883 20:19:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.883 20:19:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:21.883 20:19:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:21.883 20:19:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:21.883 20:19:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.883 20:19:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.883 20:19:46 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.NTnb15z3Uz 00:21:21.883 20:19:46 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.NTnb15z3Uz 00:21:21.883 20:19:46 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:21.883 [2024-07-15 20:19:47.172745] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.883 20:19:47 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:22.141 20:19:47 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:22.400 [2024-07-15 20:19:47.662033] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.400 [2024-07-15 20:19:47.662229] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.400 20:19:47 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:22.658 malloc0 00:21:22.658 20:19:47 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:22.917 20:19:48 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NTnb15z3Uz 00:21:23.177 [2024-07-15 20:19:48.409320] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:23.177 20:19:48 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=86384 00:21:23.177 20:19:48 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:23.177 20:19:48 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:21:23.177 20:19:48 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 86384 /var/tmp/bdevperf.sock 00:21:23.177 20:19:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86384 ']' 00:21:23.177 20:19:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:23.177 20:19:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.177 20:19:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:23.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:23.177 20:19:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.177 20:19:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.177 [2024-07-15 20:19:48.478833] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:21:23.177 [2024-07-15 20:19:48.478892] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86384 ] 00:21:23.177 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.435 [2024-07-15 20:19:48.550778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.435 [2024-07-15 20:19:48.639029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.435 20:19:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.435 20:19:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:23.435 20:19:48 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NTnb15z3Uz 00:21:23.694 20:19:48 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:23.952 [2024-07-15 20:19:49.191489] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.952 nvme0n1 00:21:23.952 20:19:49 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:24.211 Running I/O for 1 seconds... 
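Stripped of the xtrace noise, the target- and initiator-side RPC sequence for this keyring-based TLS case is short. A sketch assembled from the commands logged above (RPC names and flags are verbatim from the trace; only the comments are added, and the full /var/jenkins/... path is shortened to scripts/rpc.py):

# Target side: TCP transport, a subsystem with a TLS-capable listener (-k),
# a malloc namespace, and a PSK registered for the one allowed host.
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NTnb15z3Uz

# Initiator side (bdevperf): register the same PSK file as a keyring key and
# reference it by name when attaching the controller over TLS.
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NTnb15z3Uz
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1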
00:21:25.148 00:21:25.148 Latency(us) 00:21:25.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.148 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:25.148 Verification LBA range: start 0x0 length 0x2000 00:21:25.148 nvme0n1 : 1.02 3646.18 14.24 0.00 0.00 34709.81 8996.31 31218.97 00:21:25.148 =================================================================================================================== 00:21:25.148 Total : 3646.18 14.24 0.00 0.00 34709.81 8996.31 31218.97 00:21:25.148 0 00:21:25.148 20:19:50 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 86384 00:21:25.148 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 86384 ']' 00:21:25.148 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86384 00:21:25.148 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:25.149 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:25.149 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86384 00:21:25.149 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:25.149 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:25.149 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86384' 00:21:25.149 killing process with pid 86384 00:21:25.149 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86384 00:21:25.149 Received shutdown signal, test time was about 1.000000 seconds 00:21:25.149 00:21:25.149 Latency(us) 00:21:25.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.149 =================================================================================================================== 00:21:25.149 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.149 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86384 00:21:25.408 20:19:50 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 85839 00:21:25.408 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85839 ']' 00:21:25.408 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85839 00:21:25.408 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:25.408 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:25.408 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85839 00:21:25.408 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:25.408 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:25.408 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85839' 00:21:25.408 killing process with pid 85839 00:21:25.408 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85839 00:21:25.408 [2024-07-15 20:19:50.724028] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:25.408 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85839 00:21:25.667 20:19:50 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:21:25.667 20:19:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:25.667 20:19:50 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:21:25.667 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.667 20:19:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=86683 00:21:25.667 20:19:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 86683 00:21:25.667 20:19:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:25.667 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86683 ']' 00:21:25.667 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.667 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:25.667 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.667 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:25.667 20:19:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.667 [2024-07-15 20:19:50.999834] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:21:25.667 [2024-07-15 20:19:50.999893] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.927 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.927 [2024-07-15 20:19:51.085364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.927 [2024-07-15 20:19:51.179821] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.927 [2024-07-15 20:19:51.179861] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.927 [2024-07-15 20:19:51.179871] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.927 [2024-07-15 20:19:51.179882] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.927 [2024-07-15 20:19:51.179890] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:25.927 [2024-07-15 20:19:51.179917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.864 20:19:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:26.864 20:19:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:26.864 20:19:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:26.864 20:19:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:26.864 20:19:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.864 20:19:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.864 20:19:51 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:21:26.864 20:19:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.864 20:19:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.864 [2024-07-15 20:19:51.981731] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.864 malloc0 00:21:26.864 [2024-07-15 20:19:52.011067] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:26.864 [2024-07-15 20:19:52.011282] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.864 20:19:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.864 20:19:52 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=86949 00:21:26.864 20:19:52 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 86949 /var/tmp/bdevperf.sock 00:21:26.864 20:19:52 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:26.864 20:19:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86949 ']' 00:21:26.864 20:19:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.864 20:19:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:26.864 20:19:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:26.864 20:19:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:26.864 20:19:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.864 [2024-07-15 20:19:52.086882] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:21:26.864 [2024-07-15 20:19:52.086937] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86949 ] 00:21:26.864 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.864 [2024-07-15 20:19:52.159178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.122 [2024-07-15 20:19:52.250286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.122 20:19:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.122 20:19:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:27.122 20:19:52 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NTnb15z3Uz 00:21:27.380 20:19:52 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:27.638 [2024-07-15 20:19:52.814181] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:27.638 nvme0n1 00:21:27.638 20:19:52 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:27.896 Running I/O for 1 seconds... 00:21:28.831 00:21:28.831 Latency(us) 00:21:28.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.831 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:28.831 Verification LBA range: start 0x0 length 0x2000 00:21:28.831 nvme0n1 : 1.04 3501.40 13.68 0.00 0.00 35942.74 7119.59 48615.80 00:21:28.831 =================================================================================================================== 00:21:28.831 Total : 3501.40 13.68 0.00 0.00 35942.74 7119.59 48615.80 00:21:28.831 0 00:21:28.831 20:19:54 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:21:28.831 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.831 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.089 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.089 20:19:54 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:21:29.089 "subsystems": [ 00:21:29.089 { 00:21:29.089 "subsystem": "keyring", 00:21:29.089 "config": [ 00:21:29.089 { 00:21:29.089 "method": "keyring_file_add_key", 00:21:29.089 "params": { 00:21:29.089 "name": "key0", 00:21:29.089 "path": "/tmp/tmp.NTnb15z3Uz" 00:21:29.089 } 00:21:29.089 } 00:21:29.089 ] 00:21:29.089 }, 00:21:29.089 { 00:21:29.089 "subsystem": "iobuf", 00:21:29.089 "config": [ 00:21:29.089 { 00:21:29.089 "method": "iobuf_set_options", 00:21:29.089 "params": { 00:21:29.089 "small_pool_count": 8192, 00:21:29.089 "large_pool_count": 1024, 00:21:29.089 "small_bufsize": 8192, 00:21:29.089 "large_bufsize": 135168 00:21:29.089 } 00:21:29.089 } 00:21:29.089 ] 00:21:29.089 }, 00:21:29.089 { 00:21:29.089 "subsystem": "sock", 00:21:29.089 "config": [ 00:21:29.089 { 00:21:29.089 "method": "sock_set_default_impl", 00:21:29.089 "params": { 00:21:29.089 "impl_name": "posix" 00:21:29.089 } 
00:21:29.089 }, 00:21:29.089 { 00:21:29.089 "method": "sock_impl_set_options", 00:21:29.089 "params": { 00:21:29.089 "impl_name": "ssl", 00:21:29.089 "recv_buf_size": 4096, 00:21:29.089 "send_buf_size": 4096, 00:21:29.089 "enable_recv_pipe": true, 00:21:29.090 "enable_quickack": false, 00:21:29.090 "enable_placement_id": 0, 00:21:29.090 "enable_zerocopy_send_server": true, 00:21:29.090 "enable_zerocopy_send_client": false, 00:21:29.090 "zerocopy_threshold": 0, 00:21:29.090 "tls_version": 0, 00:21:29.090 "enable_ktls": false 00:21:29.090 } 00:21:29.090 }, 00:21:29.090 { 00:21:29.090 "method": "sock_impl_set_options", 00:21:29.090 "params": { 00:21:29.090 "impl_name": "posix", 00:21:29.090 "recv_buf_size": 2097152, 00:21:29.090 "send_buf_size": 2097152, 00:21:29.090 "enable_recv_pipe": true, 00:21:29.090 "enable_quickack": false, 00:21:29.090 "enable_placement_id": 0, 00:21:29.090 "enable_zerocopy_send_server": true, 00:21:29.090 "enable_zerocopy_send_client": false, 00:21:29.090 "zerocopy_threshold": 0, 00:21:29.090 "tls_version": 0, 00:21:29.090 "enable_ktls": false 00:21:29.090 } 00:21:29.090 } 00:21:29.090 ] 00:21:29.090 }, 00:21:29.090 { 00:21:29.090 "subsystem": "vmd", 00:21:29.090 "config": [] 00:21:29.090 }, 00:21:29.090 { 00:21:29.090 "subsystem": "accel", 00:21:29.090 "config": [ 00:21:29.090 { 00:21:29.090 "method": "accel_set_options", 00:21:29.090 "params": { 00:21:29.090 "small_cache_size": 128, 00:21:29.090 "large_cache_size": 16, 00:21:29.090 "task_count": 2048, 00:21:29.090 "sequence_count": 2048, 00:21:29.090 "buf_count": 2048 00:21:29.090 } 00:21:29.090 } 00:21:29.090 ] 00:21:29.090 }, 00:21:29.090 { 00:21:29.090 "subsystem": "bdev", 00:21:29.090 "config": [ 00:21:29.090 { 00:21:29.090 "method": "bdev_set_options", 00:21:29.090 "params": { 00:21:29.090 "bdev_io_pool_size": 65535, 00:21:29.090 "bdev_io_cache_size": 256, 00:21:29.090 "bdev_auto_examine": true, 00:21:29.090 "iobuf_small_cache_size": 128, 00:21:29.090 "iobuf_large_cache_size": 16 00:21:29.090 } 00:21:29.090 }, 00:21:29.090 { 00:21:29.090 "method": "bdev_raid_set_options", 00:21:29.090 "params": { 00:21:29.090 "process_window_size_kb": 1024 00:21:29.090 } 00:21:29.090 }, 00:21:29.090 { 00:21:29.090 "method": "bdev_iscsi_set_options", 00:21:29.090 "params": { 00:21:29.090 "timeout_sec": 30 00:21:29.090 } 00:21:29.090 }, 00:21:29.090 { 00:21:29.090 "method": "bdev_nvme_set_options", 00:21:29.090 "params": { 00:21:29.090 "action_on_timeout": "none", 00:21:29.090 "timeout_us": 0, 00:21:29.090 "timeout_admin_us": 0, 00:21:29.090 "keep_alive_timeout_ms": 10000, 00:21:29.090 "arbitration_burst": 0, 00:21:29.090 "low_priority_weight": 0, 00:21:29.090 "medium_priority_weight": 0, 00:21:29.090 "high_priority_weight": 0, 00:21:29.090 "nvme_adminq_poll_period_us": 10000, 00:21:29.090 "nvme_ioq_poll_period_us": 0, 00:21:29.090 "io_queue_requests": 0, 00:21:29.090 "delay_cmd_submit": true, 00:21:29.090 "transport_retry_count": 4, 00:21:29.090 "bdev_retry_count": 3, 00:21:29.090 "transport_ack_timeout": 0, 00:21:29.090 "ctrlr_loss_timeout_sec": 0, 00:21:29.090 "reconnect_delay_sec": 0, 00:21:29.090 "fast_io_fail_timeout_sec": 0, 00:21:29.090 "disable_auto_failback": false, 00:21:29.090 "generate_uuids": false, 00:21:29.090 "transport_tos": 0, 00:21:29.090 "nvme_error_stat": false, 00:21:29.090 "rdma_srq_size": 0, 00:21:29.090 "io_path_stat": false, 00:21:29.090 "allow_accel_sequence": false, 00:21:29.090 "rdma_max_cq_size": 0, 00:21:29.090 "rdma_cm_event_timeout_ms": 0, 00:21:29.090 "dhchap_digests": [ 00:21:29.090 "sha256", 
00:21:29.090 "sha384", 00:21:29.090 "sha512" 00:21:29.090 ], 00:21:29.090 "dhchap_dhgroups": [ 00:21:29.090 "null", 00:21:29.090 "ffdhe2048", 00:21:29.090 "ffdhe3072", 00:21:29.090 "ffdhe4096", 00:21:29.090 "ffdhe6144", 00:21:29.090 "ffdhe8192" 00:21:29.090 ] 00:21:29.090 } 00:21:29.090 }, 00:21:29.090 { 00:21:29.090 "method": "bdev_nvme_set_hotplug", 00:21:29.090 "params": { 00:21:29.090 "period_us": 100000, 00:21:29.090 "enable": false 00:21:29.090 } 00:21:29.090 }, 00:21:29.090 { 00:21:29.090 "method": "bdev_malloc_create", 00:21:29.090 "params": { 00:21:29.090 "name": "malloc0", 00:21:29.090 "num_blocks": 8192, 00:21:29.090 "block_size": 4096, 00:21:29.090 "physical_block_size": 4096, 00:21:29.090 "uuid": "3a5776cd-c5fe-46d5-9c88-16deb73693ca", 00:21:29.090 "optimal_io_boundary": 0 00:21:29.090 } 00:21:29.090 }, 00:21:29.090 { 00:21:29.090 "method": "bdev_wait_for_examine" 00:21:29.090 } 00:21:29.090 ] 00:21:29.090 }, 00:21:29.090 { 00:21:29.090 "subsystem": "nbd", 00:21:29.090 "config": [] 00:21:29.090 }, 00:21:29.090 { 00:21:29.090 "subsystem": "scheduler", 00:21:29.090 "config": [ 00:21:29.090 { 00:21:29.090 "method": "framework_set_scheduler", 00:21:29.090 "params": { 00:21:29.090 "name": "static" 00:21:29.090 } 00:21:29.090 } 00:21:29.090 ] 00:21:29.090 }, 00:21:29.090 { 00:21:29.090 "subsystem": "nvmf", 00:21:29.090 "config": [ 00:21:29.090 { 00:21:29.090 "method": "nvmf_set_config", 00:21:29.090 "params": { 00:21:29.090 "discovery_filter": "match_any", 00:21:29.090 "admin_cmd_passthru": { 00:21:29.090 "identify_ctrlr": false 00:21:29.090 } 00:21:29.090 } 00:21:29.090 }, 00:21:29.090 { 00:21:29.090 "method": "nvmf_set_max_subsystems", 00:21:29.090 "params": { 00:21:29.090 "max_subsystems": 1024 00:21:29.090 } 00:21:29.090 }, 00:21:29.090 { 00:21:29.090 "method": "nvmf_set_crdt", 00:21:29.090 "params": { 00:21:29.090 "crdt1": 0, 00:21:29.090 "crdt2": 0, 00:21:29.090 "crdt3": 0 00:21:29.090 } 00:21:29.090 }, 00:21:29.090 { 00:21:29.090 "method": "nvmf_create_transport", 00:21:29.090 "params": { 00:21:29.090 "trtype": "TCP", 00:21:29.090 "max_queue_depth": 128, 00:21:29.090 "max_io_qpairs_per_ctrlr": 127, 00:21:29.090 "in_capsule_data_size": 4096, 00:21:29.090 "max_io_size": 131072, 00:21:29.090 "io_unit_size": 131072, 00:21:29.090 "max_aq_depth": 128, 00:21:29.090 "num_shared_buffers": 511, 00:21:29.090 "buf_cache_size": 4294967295, 00:21:29.090 "dif_insert_or_strip": false, 00:21:29.090 "zcopy": false, 00:21:29.090 "c2h_success": false, 00:21:29.090 "sock_priority": 0, 00:21:29.090 "abort_timeout_sec": 1, 00:21:29.091 "ack_timeout": 0, 00:21:29.091 "data_wr_pool_size": 0 00:21:29.091 } 00:21:29.091 }, 00:21:29.091 { 00:21:29.091 "method": "nvmf_create_subsystem", 00:21:29.091 "params": { 00:21:29.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.091 "allow_any_host": false, 00:21:29.091 "serial_number": "00000000000000000000", 00:21:29.091 "model_number": "SPDK bdev Controller", 00:21:29.091 "max_namespaces": 32, 00:21:29.091 "min_cntlid": 1, 00:21:29.091 "max_cntlid": 65519, 00:21:29.091 "ana_reporting": false 00:21:29.091 } 00:21:29.091 }, 00:21:29.091 { 00:21:29.091 "method": "nvmf_subsystem_add_host", 00:21:29.091 "params": { 00:21:29.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.091 "host": "nqn.2016-06.io.spdk:host1", 00:21:29.091 "psk": "key0" 00:21:29.091 } 00:21:29.091 }, 00:21:29.091 { 00:21:29.091 "method": "nvmf_subsystem_add_ns", 00:21:29.091 "params": { 00:21:29.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.091 "namespace": { 00:21:29.091 "nsid": 1, 
00:21:29.091 "bdev_name": "malloc0", 00:21:29.091 "nguid": "3A5776CDC5FE46D59C8816DEB73693CA", 00:21:29.091 "uuid": "3a5776cd-c5fe-46d5-9c88-16deb73693ca", 00:21:29.091 "no_auto_visible": false 00:21:29.091 } 00:21:29.091 } 00:21:29.091 }, 00:21:29.091 { 00:21:29.091 "method": "nvmf_subsystem_add_listener", 00:21:29.091 "params": { 00:21:29.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.091 "listen_address": { 00:21:29.091 "trtype": "TCP", 00:21:29.091 "adrfam": "IPv4", 00:21:29.091 "traddr": "10.0.0.2", 00:21:29.091 "trsvcid": "4420" 00:21:29.091 }, 00:21:29.091 "secure_channel": false, 00:21:29.091 "sock_impl": "ssl" 00:21:29.091 } 00:21:29.091 } 00:21:29.091 ] 00:21:29.091 } 00:21:29.091 ] 00:21:29.091 }' 00:21:29.091 20:19:54 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:29.350 20:19:54 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:21:29.350 "subsystems": [ 00:21:29.350 { 00:21:29.350 "subsystem": "keyring", 00:21:29.350 "config": [ 00:21:29.350 { 00:21:29.350 "method": "keyring_file_add_key", 00:21:29.350 "params": { 00:21:29.350 "name": "key0", 00:21:29.350 "path": "/tmp/tmp.NTnb15z3Uz" 00:21:29.350 } 00:21:29.350 } 00:21:29.350 ] 00:21:29.350 }, 00:21:29.350 { 00:21:29.350 "subsystem": "iobuf", 00:21:29.350 "config": [ 00:21:29.350 { 00:21:29.350 "method": "iobuf_set_options", 00:21:29.350 "params": { 00:21:29.350 "small_pool_count": 8192, 00:21:29.350 "large_pool_count": 1024, 00:21:29.350 "small_bufsize": 8192, 00:21:29.350 "large_bufsize": 135168 00:21:29.350 } 00:21:29.350 } 00:21:29.350 ] 00:21:29.350 }, 00:21:29.350 { 00:21:29.350 "subsystem": "sock", 00:21:29.350 "config": [ 00:21:29.350 { 00:21:29.350 "method": "sock_set_default_impl", 00:21:29.350 "params": { 00:21:29.350 "impl_name": "posix" 00:21:29.350 } 00:21:29.350 }, 00:21:29.350 { 00:21:29.350 "method": "sock_impl_set_options", 00:21:29.350 "params": { 00:21:29.350 "impl_name": "ssl", 00:21:29.350 "recv_buf_size": 4096, 00:21:29.350 "send_buf_size": 4096, 00:21:29.350 "enable_recv_pipe": true, 00:21:29.350 "enable_quickack": false, 00:21:29.350 "enable_placement_id": 0, 00:21:29.350 "enable_zerocopy_send_server": true, 00:21:29.350 "enable_zerocopy_send_client": false, 00:21:29.350 "zerocopy_threshold": 0, 00:21:29.350 "tls_version": 0, 00:21:29.350 "enable_ktls": false 00:21:29.350 } 00:21:29.350 }, 00:21:29.350 { 00:21:29.350 "method": "sock_impl_set_options", 00:21:29.350 "params": { 00:21:29.350 "impl_name": "posix", 00:21:29.350 "recv_buf_size": 2097152, 00:21:29.350 "send_buf_size": 2097152, 00:21:29.350 "enable_recv_pipe": true, 00:21:29.350 "enable_quickack": false, 00:21:29.350 "enable_placement_id": 0, 00:21:29.350 "enable_zerocopy_send_server": true, 00:21:29.350 "enable_zerocopy_send_client": false, 00:21:29.350 "zerocopy_threshold": 0, 00:21:29.350 "tls_version": 0, 00:21:29.350 "enable_ktls": false 00:21:29.350 } 00:21:29.350 } 00:21:29.350 ] 00:21:29.350 }, 00:21:29.350 { 00:21:29.350 "subsystem": "vmd", 00:21:29.350 "config": [] 00:21:29.350 }, 00:21:29.350 { 00:21:29.350 "subsystem": "accel", 00:21:29.350 "config": [ 00:21:29.350 { 00:21:29.350 "method": "accel_set_options", 00:21:29.350 "params": { 00:21:29.350 "small_cache_size": 128, 00:21:29.350 "large_cache_size": 16, 00:21:29.350 "task_count": 2048, 00:21:29.350 "sequence_count": 2048, 00:21:29.350 "buf_count": 2048 00:21:29.350 } 00:21:29.350 } 00:21:29.350 ] 00:21:29.350 }, 00:21:29.350 { 00:21:29.350 "subsystem": "bdev", 
00:21:29.350 "config": [ 00:21:29.350 { 00:21:29.350 "method": "bdev_set_options", 00:21:29.350 "params": { 00:21:29.350 "bdev_io_pool_size": 65535, 00:21:29.350 "bdev_io_cache_size": 256, 00:21:29.350 "bdev_auto_examine": true, 00:21:29.350 "iobuf_small_cache_size": 128, 00:21:29.350 "iobuf_large_cache_size": 16 00:21:29.350 } 00:21:29.350 }, 00:21:29.350 { 00:21:29.350 "method": "bdev_raid_set_options", 00:21:29.350 "params": { 00:21:29.350 "process_window_size_kb": 1024 00:21:29.350 } 00:21:29.350 }, 00:21:29.350 { 00:21:29.350 "method": "bdev_iscsi_set_options", 00:21:29.350 "params": { 00:21:29.350 "timeout_sec": 30 00:21:29.350 } 00:21:29.350 }, 00:21:29.350 { 00:21:29.350 "method": "bdev_nvme_set_options", 00:21:29.350 "params": { 00:21:29.350 "action_on_timeout": "none", 00:21:29.350 "timeout_us": 0, 00:21:29.350 "timeout_admin_us": 0, 00:21:29.350 "keep_alive_timeout_ms": 10000, 00:21:29.350 "arbitration_burst": 0, 00:21:29.350 "low_priority_weight": 0, 00:21:29.350 "medium_priority_weight": 0, 00:21:29.350 "high_priority_weight": 0, 00:21:29.350 "nvme_adminq_poll_period_us": 10000, 00:21:29.350 "nvme_ioq_poll_period_us": 0, 00:21:29.350 "io_queue_requests": 512, 00:21:29.350 "delay_cmd_submit": true, 00:21:29.350 "transport_retry_count": 4, 00:21:29.350 "bdev_retry_count": 3, 00:21:29.350 "transport_ack_timeout": 0, 00:21:29.350 "ctrlr_loss_timeout_sec": 0, 00:21:29.350 "reconnect_delay_sec": 0, 00:21:29.350 "fast_io_fail_timeout_sec": 0, 00:21:29.350 "disable_auto_failback": false, 00:21:29.350 "generate_uuids": false, 00:21:29.350 "transport_tos": 0, 00:21:29.350 "nvme_error_stat": false, 00:21:29.350 "rdma_srq_size": 0, 00:21:29.350 "io_path_stat": false, 00:21:29.350 "allow_accel_sequence": false, 00:21:29.350 "rdma_max_cq_size": 0, 00:21:29.350 "rdma_cm_event_timeout_ms": 0, 00:21:29.350 "dhchap_digests": [ 00:21:29.350 "sha256", 00:21:29.350 "sha384", 00:21:29.350 "sha512" 00:21:29.350 ], 00:21:29.350 "dhchap_dhgroups": [ 00:21:29.350 "null", 00:21:29.350 "ffdhe2048", 00:21:29.350 "ffdhe3072", 00:21:29.350 "ffdhe4096", 00:21:29.350 "ffdhe6144", 00:21:29.350 "ffdhe8192" 00:21:29.350 ] 00:21:29.350 } 00:21:29.350 }, 00:21:29.350 { 00:21:29.350 "method": "bdev_nvme_attach_controller", 00:21:29.350 "params": { 00:21:29.350 "name": "nvme0", 00:21:29.350 "trtype": "TCP", 00:21:29.350 "adrfam": "IPv4", 00:21:29.350 "traddr": "10.0.0.2", 00:21:29.350 "trsvcid": "4420", 00:21:29.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.350 "prchk_reftag": false, 00:21:29.350 "prchk_guard": false, 00:21:29.350 "ctrlr_loss_timeout_sec": 0, 00:21:29.350 "reconnect_delay_sec": 0, 00:21:29.350 "fast_io_fail_timeout_sec": 0, 00:21:29.350 "psk": "key0", 00:21:29.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:29.350 "hdgst": false, 00:21:29.350 "ddgst": false 00:21:29.350 } 00:21:29.350 }, 00:21:29.350 { 00:21:29.350 "method": "bdev_nvme_set_hotplug", 00:21:29.350 "params": { 00:21:29.350 "period_us": 100000, 00:21:29.350 "enable": false 00:21:29.350 } 00:21:29.350 }, 00:21:29.350 { 00:21:29.350 "method": "bdev_enable_histogram", 00:21:29.350 "params": { 00:21:29.350 "name": "nvme0n1", 00:21:29.350 "enable": true 00:21:29.350 } 00:21:29.350 }, 00:21:29.350 { 00:21:29.350 "method": "bdev_wait_for_examine" 00:21:29.350 } 00:21:29.350 ] 00:21:29.350 }, 00:21:29.350 { 00:21:29.350 "subsystem": "nbd", 00:21:29.350 "config": [] 00:21:29.350 } 00:21:29.350 ] 00:21:29.350 }' 00:21:29.351 20:19:54 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 86949 00:21:29.351 20:19:54 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 86949 ']' 00:21:29.351 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86949 00:21:29.351 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:29.351 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:29.351 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86949 00:21:29.351 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:29.351 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:29.351 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86949' 00:21:29.351 killing process with pid 86949 00:21:29.351 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86949 00:21:29.351 Received shutdown signal, test time was about 1.000000 seconds 00:21:29.351 00:21:29.351 Latency(us) 00:21:29.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.351 =================================================================================================================== 00:21:29.351 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.351 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86949 00:21:29.609 20:19:54 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 86683 00:21:29.609 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 86683 ']' 00:21:29.609 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86683 00:21:29.609 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:29.609 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:29.609 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86683 00:21:29.609 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:29.609 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:29.609 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86683' 00:21:29.609 killing process with pid 86683 00:21:29.609 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86683 00:21:29.609 20:19:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86683 00:21:29.868 20:19:55 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:21:29.868 20:19:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:29.868 20:19:55 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:21:29.868 "subsystems": [ 00:21:29.868 { 00:21:29.868 "subsystem": "keyring", 00:21:29.868 "config": [ 00:21:29.868 { 00:21:29.868 "method": "keyring_file_add_key", 00:21:29.868 "params": { 00:21:29.868 "name": "key0", 00:21:29.868 "path": "/tmp/tmp.NTnb15z3Uz" 00:21:29.868 } 00:21:29.868 } 00:21:29.868 ] 00:21:29.868 }, 00:21:29.868 { 00:21:29.868 "subsystem": "iobuf", 00:21:29.868 "config": [ 00:21:29.868 { 00:21:29.868 "method": "iobuf_set_options", 00:21:29.868 "params": { 00:21:29.868 "small_pool_count": 8192, 00:21:29.868 "large_pool_count": 1024, 00:21:29.868 "small_bufsize": 8192, 00:21:29.868 "large_bufsize": 135168 00:21:29.868 } 00:21:29.868 } 00:21:29.868 ] 00:21:29.868 }, 00:21:29.868 { 00:21:29.868 "subsystem": "sock", 00:21:29.868 "config": [ 00:21:29.868 { 00:21:29.868 "method": 
"sock_set_default_impl", 00:21:29.868 "params": { 00:21:29.868 "impl_name": "posix" 00:21:29.868 } 00:21:29.868 }, 00:21:29.868 { 00:21:29.868 "method": "sock_impl_set_options", 00:21:29.868 "params": { 00:21:29.868 "impl_name": "ssl", 00:21:29.868 "recv_buf_size": 4096, 00:21:29.868 "send_buf_size": 4096, 00:21:29.868 "enable_recv_pipe": true, 00:21:29.868 "enable_quickack": false, 00:21:29.868 "enable_placement_id": 0, 00:21:29.868 "enable_zerocopy_send_server": true, 00:21:29.868 "enable_zerocopy_send_client": false, 00:21:29.868 "zerocopy_threshold": 0, 00:21:29.868 "tls_version": 0, 00:21:29.868 "enable_ktls": false 00:21:29.868 } 00:21:29.868 }, 00:21:29.868 { 00:21:29.868 "method": "sock_impl_set_options", 00:21:29.868 "params": { 00:21:29.868 "impl_name": "posix", 00:21:29.868 "recv_buf_size": 2097152, 00:21:29.868 "send_buf_size": 2097152, 00:21:29.868 "enable_recv_pipe": true, 00:21:29.868 "enable_quickack": false, 00:21:29.868 "enable_placement_id": 0, 00:21:29.868 "enable_zerocopy_send_server": true, 00:21:29.868 "enable_zerocopy_send_client": false, 00:21:29.868 "zerocopy_threshold": 0, 00:21:29.868 "tls_version": 0, 00:21:29.868 "enable_ktls": false 00:21:29.868 } 00:21:29.868 } 00:21:29.868 ] 00:21:29.868 }, 00:21:29.868 { 00:21:29.868 "subsystem": "vmd", 00:21:29.868 "config": [] 00:21:29.868 }, 00:21:29.868 { 00:21:29.868 "subsystem": "accel", 00:21:29.868 "config": [ 00:21:29.868 { 00:21:29.868 "method": "accel_set_options", 00:21:29.868 "params": { 00:21:29.868 "small_cache_size": 128, 00:21:29.868 "large_cache_size": 16, 00:21:29.868 "task_count": 2048, 00:21:29.868 "sequence_count": 2048, 00:21:29.868 "buf_count": 2048 00:21:29.868 } 00:21:29.868 } 00:21:29.868 ] 00:21:29.868 }, 00:21:29.868 { 00:21:29.868 "subsystem": "bdev", 00:21:29.868 "config": [ 00:21:29.868 { 00:21:29.868 "method": "bdev_set_options", 00:21:29.868 "params": { 00:21:29.868 "bdev_io_pool_size": 65535, 00:21:29.868 "bdev_io_cache_size": 256, 00:21:29.868 "bdev_auto_examine": true, 00:21:29.868 "iobuf_small_cache_size": 128, 00:21:29.868 "iobuf_large_cache_size": 16 00:21:29.868 } 00:21:29.868 }, 00:21:29.868 { 00:21:29.868 "method": "bdev_raid_set_options", 00:21:29.868 "params": { 00:21:29.868 "process_window_size_kb": 1024 00:21:29.868 } 00:21:29.868 }, 00:21:29.868 { 00:21:29.868 "method": "bdev_iscsi_set_options", 00:21:29.868 "params": { 00:21:29.868 "timeout_sec": 30 00:21:29.868 } 00:21:29.868 }, 00:21:29.868 { 00:21:29.868 "method": "bdev_nvme_set_options", 00:21:29.868 "params": { 00:21:29.868 "action_on_timeout": "none", 00:21:29.868 "timeout_us": 0, 00:21:29.868 "timeout_admin_us": 0, 00:21:29.868 "keep_alive_timeout_ms": 10000, 00:21:29.868 "arbitration_burst": 0, 00:21:29.868 "low_priority_weight": 0, 00:21:29.868 "medium_priority_weight": 0, 00:21:29.868 "high_priority_weight": 0, 00:21:29.868 "nvme_adminq_poll_period_us": 10000, 00:21:29.868 "nvme_ioq_poll_period_us": 0, 00:21:29.868 "io_queue_requests": 0, 00:21:29.869 "delay_cmd_submit": true, 00:21:29.869 "transport_retry_count": 4, 00:21:29.869 "bdev_retry_count": 3, 00:21:29.869 "transport_ack_timeout": 0, 00:21:29.869 "ctrlr_loss_timeout_sec": 0, 00:21:29.869 "reconnect_delay_sec": 0, 00:21:29.869 "fast_io_fail_timeout_sec": 0, 00:21:29.869 "disable_auto_failback": false, 00:21:29.869 "generate_uuids": false, 00:21:29.869 "transport_tos": 0, 00:21:29.869 "nvme_error_stat": false, 00:21:29.869 "rdma_srq_size": 0, 00:21:29.869 "io_path_stat": false, 00:21:29.869 "allow_accel_sequence": false, 00:21:29.869 "rdma_max_cq_size": 0, 
00:21:29.869 "rdma_cm_event_timeout_ms": 0, 00:21:29.869 "dhchap_digests": [ 00:21:29.869 "sha256", 00:21:29.869 "sha384", 00:21:29.869 "sha512" 00:21:29.869 ], 00:21:29.869 "dhchap_dhgroups": [ 00:21:29.869 "null", 00:21:29.869 "ffdhe2048", 00:21:29.869 "ffdhe3072", 00:21:29.869 "ffdhe4096", 00:21:29.869 "ffdhe6144", 00:21:29.869 "ffdhe8192" 00:21:29.869 ] 00:21:29.869 } 00:21:29.869 }, 00:21:29.869 { 00:21:29.869 "method": "bdev_nvme_set_hotplug", 00:21:29.869 "params": { 00:21:29.869 "period_us": 100000, 00:21:29.869 "enable": false 00:21:29.869 } 00:21:29.869 }, 00:21:29.869 { 00:21:29.869 "method": "bdev_malloc_create", 00:21:29.869 "params": { 00:21:29.869 "name": "malloc0", 00:21:29.869 "num_blocks": 8192, 00:21:29.869 "block_size": 4096, 00:21:29.869 "physical_block_size": 4096, 00:21:29.869 "uuid": "3a5776cd-c5fe-46d5-9c88-16deb73693ca", 00:21:29.869 "optimal_io_boundary": 0 00:21:29.869 } 00:21:29.869 }, 00:21:29.869 { 00:21:29.869 "method": "bdev_wait_for_examine" 00:21:29.869 } 00:21:29.869 ] 00:21:29.869 }, 00:21:29.869 { 00:21:29.869 "subsystem": "nbd", 00:21:29.869 "config": [] 00:21:29.869 }, 00:21:29.869 { 00:21:29.869 "subsystem": "scheduler", 00:21:29.869 "config": [ 00:21:29.869 { 00:21:29.869 "method": "framework_set_scheduler", 00:21:29.869 "params": { 00:21:29.869 "name": "static" 00:21:29.869 } 00:21:29.869 } 00:21:29.869 ] 00:21:29.869 }, 00:21:29.869 { 00:21:29.869 "subsystem": "nvmf", 00:21:29.869 "config": [ 00:21:29.869 { 00:21:29.869 "method": "nvmf_set_config", 00:21:29.869 "params": { 00:21:29.869 "discovery_filter": "match_any", 00:21:29.869 "admin_cmd_passthru": { 00:21:29.869 "identify_ctrlr": false 00:21:29.869 } 00:21:29.869 } 00:21:29.869 }, 00:21:29.869 { 00:21:29.869 "method": "nvmf_set_max_subsystems", 00:21:29.869 "params": { 00:21:29.869 "max_subsystems": 1024 00:21:29.869 } 00:21:29.869 }, 00:21:29.869 { 00:21:29.869 "method": "nvmf_set_crdt", 00:21:29.869 "params": { 00:21:29.869 "crdt1": 0, 00:21:29.869 "crdt2": 0, 00:21:29.869 "crdt3": 0 00:21:29.869 } 00:21:29.869 }, 00:21:29.869 { 00:21:29.869 "method": "nvmf_create_transport", 00:21:29.869 "params": { 00:21:29.869 "trtype": "TCP", 00:21:29.869 "max_queue_depth": 128, 00:21:29.869 "max_io_qpairs_per_ctrlr": 127, 00:21:29.869 "in_capsule_data_size": 4096, 00:21:29.869 "max_io_size": 131072, 00:21:29.869 "io_unit_size": 131072, 00:21:29.869 "max_aq_depth": 128, 00:21:29.869 "num_shared_buffers": 511, 00:21:29.869 "buf_cache_size": 4294967295, 00:21:29.869 "dif_insert_or_strip": false, 00:21:29.869 "zcopy": false, 00:21:29.869 "c2h_success": false, 00:21:29.869 "sock_priority": 0, 00:21:29.869 "abort_timeout_sec": 1, 00:21:29.869 "ack_timeout": 0, 00:21:29.869 "data_wr_pool_size": 0 00:21:29.869 } 00:21:29.869 }, 00:21:29.869 { 00:21:29.869 "method": "nvmf_create_subsystem", 00:21:29.869 "params": { 00:21:29.869 "nqn": "nqn.2016-06.io.spdk:cnode1", 20:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:29.869 00:21:29.869 "allow_any_host": false, 00:21:29.869 "serial_number": "00000000000000000000", 00:21:29.869 "model_number": "SPDK bdev Controller", 00:21:29.869 "max_namespaces": 32, 00:21:29.869 "min_cntlid": 1, 00:21:29.869 "max_cntlid": 65519, 00:21:29.869 "ana_reporting": false 00:21:29.869 } 00:21:29.869 }, 00:21:29.869 { 00:21:29.869 "method": "nvmf_subsystem_add_host", 00:21:29.869 "params": { 00:21:29.869 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.869 "host": "nqn.2016-06.io.spdk:host1", 00:21:29.869 "psk": "key0" 00:21:29.869 } 00:21:29.869 }, 
00:21:29.869 { 00:21:29.869 "method": "nvmf_subsystem_add_ns", 00:21:29.869 "params": { 00:21:29.869 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.869 "namespace": { 00:21:29.869 "nsid": 1, 00:21:29.869 "bdev_name": "malloc0", 00:21:29.869 "nguid": "3A5776CDC5FE46D59C8816DEB73693CA", 00:21:29.869 "uuid": "3a5776cd-c5fe-46d5-9c88-16deb73693ca", 00:21:29.869 "no_auto_visible": false 00:21:29.869 } 00:21:29.869 } 00:21:29.869 }, 00:21:29.869 { 00:21:29.869 "method": "nvmf_subsystem_add_listener", 00:21:29.869 "params": { 00:21:29.869 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.869 "listen_address": { 00:21:29.869 "trtype": "TCP", 00:21:29.869 "adrfam": "IPv4", 00:21:29.869 "traddr": "10.0.0.2", 00:21:29.869 "trsvcid": "4420" 00:21:29.869 }, 00:21:29.869 "secure_channel": false, 00:21:29.869 "sock_impl": "ssl" 00:21:29.869 } 00:21:29.869 } 00:21:29.869 ] 00:21:29.869 } 00:21:29.869 ] 00:21:29.869 }' 00:21:29.869 20:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.869 20:19:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=87490 00:21:29.869 20:19:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 87490 00:21:29.869 20:19:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:29.869 20:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 87490 ']' 00:21:29.869 20:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.869 20:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.869 20:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.869 20:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.869 20:19:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.869 [2024-07-15 20:19:55.090078] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:21:29.869 [2024-07-15 20:19:55.090135] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.869 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.869 [2024-07-15 20:19:55.175281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.128 [2024-07-15 20:19:55.264716] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.128 [2024-07-15 20:19:55.264757] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.128 [2024-07-15 20:19:55.264768] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.128 [2024-07-15 20:19:55.264777] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.128 [2024-07-15 20:19:55.264785] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:30.128 [2024-07-15 20:19:55.264845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.386 [2024-07-15 20:19:55.484233] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.386 [2024-07-15 20:19:55.516243] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:30.386 [2024-07-15 20:19:55.530573] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.953 20:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.953 20:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:30.953 20:19:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:30.953 20:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:30.953 20:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.953 20:19:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.953 20:19:56 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=87768 00:21:30.953 20:19:56 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 87768 /var/tmp/bdevperf.sock 00:21:30.953 20:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 87768 ']' 00:21:30.953 20:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.953 20:19:56 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:30.953 20:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.953 20:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:30.953 20:19:56 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:21:30.953 "subsystems": [ 00:21:30.953 { 00:21:30.953 "subsystem": "keyring", 00:21:30.953 "config": [ 00:21:30.953 { 00:21:30.953 "method": "keyring_file_add_key", 00:21:30.953 "params": { 00:21:30.953 "name": "key0", 00:21:30.953 "path": "/tmp/tmp.NTnb15z3Uz" 00:21:30.953 } 00:21:30.953 } 00:21:30.953 ] 00:21:30.953 }, 00:21:30.953 { 00:21:30.953 "subsystem": "iobuf", 00:21:30.953 "config": [ 00:21:30.954 { 00:21:30.954 "method": "iobuf_set_options", 00:21:30.954 "params": { 00:21:30.954 "small_pool_count": 8192, 00:21:30.954 "large_pool_count": 1024, 00:21:30.954 "small_bufsize": 8192, 00:21:30.954 "large_bufsize": 135168 00:21:30.954 } 00:21:30.954 } 00:21:30.954 ] 00:21:30.954 }, 00:21:30.954 { 00:21:30.954 "subsystem": "sock", 00:21:30.954 "config": [ 00:21:30.954 { 00:21:30.954 "method": "sock_set_default_impl", 00:21:30.954 "params": { 00:21:30.954 "impl_name": "posix" 00:21:30.954 } 00:21:30.954 }, 00:21:30.954 { 00:21:30.954 "method": "sock_impl_set_options", 00:21:30.954 "params": { 00:21:30.954 "impl_name": "ssl", 00:21:30.954 "recv_buf_size": 4096, 00:21:30.954 "send_buf_size": 4096, 00:21:30.954 "enable_recv_pipe": true, 00:21:30.954 "enable_quickack": false, 00:21:30.954 "enable_placement_id": 0, 00:21:30.954 "enable_zerocopy_send_server": true, 00:21:30.954 "enable_zerocopy_send_client": false, 00:21:30.954 "zerocopy_threshold": 0, 00:21:30.954 "tls_version": 0, 00:21:30.954 "enable_ktls": false 00:21:30.954 } 00:21:30.954 }, 00:21:30.954 { 00:21:30.954 "method": "sock_impl_set_options", 00:21:30.954 "params": { 00:21:30.954 "impl_name": "posix", 00:21:30.954 "recv_buf_size": 2097152, 00:21:30.954 "send_buf_size": 2097152, 00:21:30.954 "enable_recv_pipe": true, 00:21:30.954 "enable_quickack": false, 00:21:30.954 "enable_placement_id": 0, 00:21:30.954 "enable_zerocopy_send_server": true, 00:21:30.954 "enable_zerocopy_send_client": false, 00:21:30.954 "zerocopy_threshold": 0, 00:21:30.954 "tls_version": 0, 00:21:30.954 "enable_ktls": false 00:21:30.954 } 00:21:30.954 } 00:21:30.954 ] 00:21:30.954 }, 00:21:30.954 { 00:21:30.954 "subsystem": "vmd", 00:21:30.954 "config": [] 00:21:30.954 }, 00:21:30.954 { 00:21:30.954 "subsystem": "accel", 00:21:30.954 "config": [ 00:21:30.954 { 00:21:30.954 "method": "accel_set_options", 00:21:30.954 "params": { 00:21:30.954 "small_cache_size": 128, 00:21:30.954 "large_cache_size": 16, 00:21:30.954 "task_count": 2048, 00:21:30.954 "sequence_count": 2048, 00:21:30.954 "buf_count": 2048 00:21:30.954 } 00:21:30.954 } 00:21:30.954 ] 00:21:30.954 }, 00:21:30.954 { 00:21:30.954 "subsystem": "bdev", 00:21:30.954 "config": [ 00:21:30.954 { 00:21:30.954 "method": "bdev_set_options", 00:21:30.954 "params": { 00:21:30.954 "bdev_io_pool_size": 65535, 00:21:30.954 "bdev_io_cache_size": 256, 00:21:30.954 "bdev_auto_examine": true, 00:21:30.954 "iobuf_small_cache_size": 128, 00:21:30.954 "iobuf_large_cache_size": 16 00:21:30.954 } 00:21:30.954 }, 00:21:30.954 { 00:21:30.954 "method": "bdev_raid_set_options", 00:21:30.954 "params": { 00:21:30.954 "process_window_size_kb": 1024 00:21:30.954 } 00:21:30.954 }, 00:21:30.954 { 00:21:30.954 "method": "bdev_iscsi_set_options", 00:21:30.954 "params": { 00:21:30.954 "timeout_sec": 30 00:21:30.954 } 00:21:30.954 }, 00:21:30.954 { 00:21:30.954 "method": "bdev_nvme_set_options", 00:21:30.954 "params": { 00:21:30.954 "action_on_timeout": "none", 00:21:30.954 "timeout_us": 0, 00:21:30.954 "timeout_admin_us": 0, 00:21:30.954 "keep_alive_timeout_ms": 
10000, 00:21:30.954 "arbitration_burst": 0, 00:21:30.954 "low_priority_weight": 0, 00:21:30.954 "medium_priority_weight": 0, 00:21:30.954 "high_priority_weight": 0, 00:21:30.954 "nvme_adminq_poll_period_us": 10000, 00:21:30.954 "nvme_ioq_poll_period_us": 0, 00:21:30.954 "io_queue_requests": 512, 00:21:30.954 "delay_cmd_submit": true, 00:21:30.954 "transport_retry_count": 4, 00:21:30.954 "bdev_retry_count": 3, 00:21:30.954 "transport_ack_timeout": 0, 00:21:30.954 "ctrlr_loss_timeout_sec": 0, 00:21:30.954 "reconnect_delay_sec": 0, 00:21:30.954 "fast_io_fail_timeout_sec": 0, 00:21:30.954 "disable_auto_failback": false, 00:21:30.954 "generate_uuids": false, 00:21:30.954 "transport_tos": 0, 00:21:30.954 "nvme_error_stat": false, 00:21:30.954 "rdma_srq_size": 0, 00:21:30.954 "io_path_stat": false, 00:21:30.954 "allow_accel_sequence": false, 00:21:30.954 "rdma_max_cq_size": 0, 00:21:30.954 "rdma_cm_event_timeout_ms": 0, 00:21:30.954 "dhchap_digests": [ 00:21:30.954 "sha256", 00:21:30.954 "sha384", 00:21:30.954 "sha512" 00:21:30.954 ], 00:21:30.954 "dhchap_dhgroups": [ 00:21:30.954 "null", 00:21:30.954 "ffdhe2048", 00:21:30.954 "ffdhe3072", 00:21:30.954 "ffdhe4096", 00:21:30.954 "ffdhe6144", 00:21:30.954 "ffdhe8192" 00:21:30.954 ] 00:21:30.954 } 00:21:30.954 }, 00:21:30.954 { 00:21:30.954 "method": "bdev_nvme_attach_controller", 00:21:30.954 "params": { 00:21:30.954 "name": "nvme0", 00:21:30.954 "trtype": "TCP", 00:21:30.954 "adrfam": "IPv4", 00:21:30.954 "traddr": "10.0.0.2", 00:21:30.954 "trsvcid": "4420", 00:21:30.954 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.954 "prchk_reftag": false, 00:21:30.954 "prchk_guard": false, 00:21:30.954 "ctrlr_loss_timeout_sec": 0, 00:21:30.954 "reconnect_delay_sec": 0, 00:21:30.954 "fast_io_fail_timeout_sec": 0, 00:21:30.954 "psk": "key0", 00:21:30.954 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:30.954 "hdgst": false, 00:21:30.954 "ddgst": false 00:21:30.954 } 00:21:30.954 }, 00:21:30.954 { 00:21:30.954 "method": "bdev_nvme_set_hotplug", 00:21:30.954 "params": { 00:21:30.954 "period_us": 100000, 00:21:30.954 "enable": false 00:21:30.954 } 00:21:30.954 }, 00:21:30.954 { 00:21:30.954 "method": "bdev_enable_histogram", 00:21:30.954 "params": { 00:21:30.954 "name": "nvme0n1", 00:21:30.954 "enable": true 00:21:30.954 } 00:21:30.954 }, 00:21:30.954 { 00:21:30.954 "method": "bdev_wait_for_examine" 00:21:30.954 } 00:21:30.954 ] 00:21:30.954 }, 00:21:30.954 { 00:21:30.954 "subsystem": "nbd", 00:21:30.954 "config": [] 00:21:30.954 } 00:21:30.954 ] 00:21:30.954 }' 00:21:30.954 20:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.954 20:19:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.954 [2024-07-15 20:19:56.120117] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:21:30.954 [2024-07-15 20:19:56.120174] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87768 ] 00:21:30.954 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.954 [2024-07-15 20:19:56.190742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.954 [2024-07-15 20:19:56.281400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.212 [2024-07-15 20:19:56.438387] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:31.780 20:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.780 20:19:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:31.780 20:19:57 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:31.780 20:19:57 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:21:32.039 20:19:57 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.039 20:19:57 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:32.297 Running I/O for 1 seconds... 00:21:33.250 00:21:33.250 Latency(us) 00:21:33.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.250 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:33.250 Verification LBA range: start 0x0 length 0x2000 00:21:33.250 nvme0n1 : 1.02 3783.00 14.78 0.00 0.00 33493.77 8698.41 39798.23 00:21:33.250 =================================================================================================================== 00:21:33.250 Total : 3783.00 14.78 0.00 0.00 33493.77 8698.41 39798.23 00:21:33.250 0 00:21:33.250 20:19:58 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:21:33.250 20:19:58 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:21:33.250 20:19:58 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:33.250 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:21:33.250 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:21:33.250 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:33.250 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:33.250 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:33.250 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:33.250 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:33.250 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:33.250 nvmf_trace.0 00:21:33.250 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:21:33.250 20:19:58 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 87768 00:21:33.250 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 87768 ']' 00:21:33.250 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # 
kill -0 87768 00:21:33.250 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:33.250 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:33.250 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87768 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87768' 00:21:33.572 killing process with pid 87768 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 87768 00:21:33.572 Received shutdown signal, test time was about 1.000000 seconds 00:21:33.572 00:21:33.572 Latency(us) 00:21:33.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.572 =================================================================================================================== 00:21:33.572 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 87768 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:33.572 rmmod nvme_tcp 00:21:33.572 rmmod nvme_fabrics 00:21:33.572 rmmod nvme_keyring 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 87490 ']' 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 87490 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 87490 ']' 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 87490 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:33.572 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87490 00:21:33.866 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:33.866 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:33.866 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87490' 00:21:33.866 killing process with pid 87490 00:21:33.866 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 87490 00:21:33.866 20:19:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 87490 00:21:33.866 20:19:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:33.866 20:19:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:33.866 20:19:59 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:33.866 20:19:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:33.866 20:19:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:33.866 20:19:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.866 20:19:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.866 20:19:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.404 20:20:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:36.404 20:20:01 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.dda7ceQAAd /tmp/tmp.t8uIs7MbYp /tmp/tmp.NTnb15z3Uz 00:21:36.404 00:21:36.404 real 1m22.701s 00:21:36.404 user 2m8.116s 00:21:36.404 sys 0m29.700s 00:21:36.404 20:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:36.404 20:20:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.404 ************************************ 00:21:36.404 END TEST nvmf_tls 00:21:36.404 ************************************ 00:21:36.404 20:20:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:36.404 20:20:01 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:36.404 20:20:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:36.404 20:20:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:36.404 20:20:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:36.404 ************************************ 00:21:36.404 START TEST nvmf_fips 00:21:36.404 ************************************ 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:36.404 * Looking for test storage... 
00:21:36.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.404 20:20:01 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.404 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:36.405 Error setting digest 00:21:36.405 0092F813117F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:36.405 0092F813117F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:36.405 20:20:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:41.684 
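The fips.sh trace above shows how the test qualifies the system OpenSSL before attempting NVMe/TCP with TLS: it locates the provider module directory, detects that the Red Hat build disables `openssl fipsinstall`, points OPENSSL_CONF at a generated config, confirms that exactly the base and fips providers are loaded, and finally checks that MD5 is rejected. A minimal stand-alone sketch of that probe sequence (not the fips.sh source itself; the spdk_fips.conf name and the RHEL warning text are simply what this run produced):

    # Sketch: confirm a FIPS-capable OpenSSL 3.x before running the TLS test.
    modulesdir=$(openssl info -modulesdir)            # e.g. /usr/lib64/ossl-modules
    [[ -f "$modulesdir/fips.so" ]] || { echo "no FIPS provider module"; exit 1; }

    # RHEL builds disable `openssl fipsinstall`; detect that and rely on the
    # distro-managed FIPS configuration instead of self-installing.
    warn=$(openssl fipsinstall -help 2>&1 || true)
    [[ $warn == "This command is not enabled"* ]] && echo "distro-managed FIPS config"

    # Point OpenSSL at a config that loads the base + fips providers, then verify.
    export OPENSSL_CONF=spdk_fips.conf
    mapfile -t providers < <(openssl list -providers | grep name)
    (( ${#providers[@]} == 2 )) || { echo "expected base + fips providers"; exit 1; }

    # With FIPS active, a non-approved digest must fail; md5 succeeding means
    # FIPS mode is not actually in force.
    if openssl md5 /dev/null >/dev/null 2>&1; then
        echo "md5 unexpectedly allowed - FIPS mode not active"; exit 1
    fi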
20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:41.684 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:41.684 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:41.685 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:41.685 Found net devices under 0000:af:00.0: cvl_0_0 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:41.685 Found net devices under 0000:af:00.1: cvl_0_1 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:41.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:41.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:21:41.685 00:21:41.685 --- 10.0.0.2 ping statistics --- 00:21:41.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.685 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:41.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:21:41.685 00:21:41.685 --- 10.0.0.1 ping statistics --- 00:21:41.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.685 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:41.685 20:20:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:41.685 20:20:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:41.685 20:20:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:41.685 20:20:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:41.685 20:20:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:41.685 20:20:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=91810 00:21:41.685 20:20:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 91810 00:21:41.685 20:20:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 91810 ']' 00:21:41.685 20:20:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.685 20:20:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:41.685 20:20:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.685 20:20:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.685 20:20:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:41.685 20:20:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:41.945 [2024-07-15 20:20:07.100961] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:21:41.945 [2024-07-15 20:20:07.101019] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.945 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.945 [2024-07-15 20:20:07.176662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.945 [2024-07-15 20:20:07.267016] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.945 [2024-07-15 20:20:07.267055] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
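The nvmf_tcp_init portion of the trace builds the physical-NIC test topology: one port of the back-to-back E810 pair is moved into a network namespace and acts as the target side, the other stays in the root namespace as the initiator, and a single ping in each direction verifies connectivity before any NVMe traffic. Reconstructed as a stand-alone sketch using only the commands visible above (the cvl_0_0/cvl_0_1 names and the 10.0.0.0/24 addressing are just what this run used):

    # Sketch of the namespace topology set up by nvmftestinit on phy runs.
    target_if=cvl_0_0; initiator_if=cvl_0_1; ns=cvl_0_0_ns_spdk

    ip -4 addr flush "$target_if"; ip -4 addr flush "$initiator_if"
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"          # target port lives in the namespace

    ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator side, root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # Let NVMe/TCP (port 4420) in through the initiator-side interface.
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                            # root ns -> target ns
    ip netns exec "$ns" ping -c 1 10.0.0.1        # target ns -> root ns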
00:21:41.945 [2024-07-15 20:20:07.267065] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.945 [2024-07-15 20:20:07.267074] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.945 [2024-07-15 20:20:07.267081] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.945 [2024-07-15 20:20:07.267102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.882 20:20:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.882 20:20:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:42.882 20:20:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:42.882 20:20:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:42.882 20:20:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:42.882 20:20:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.882 20:20:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:42.882 20:20:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:42.882 20:20:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:42.882 20:20:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:42.882 20:20:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:42.882 20:20:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:42.882 20:20:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:42.882 20:20:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:43.142 [2024-07-15 20:20:08.264141] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.142 [2024-07-15 20:20:08.280131] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:43.142 [2024-07-15 20:20:08.280307] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.142 [2024-07-15 20:20:08.309449] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:43.142 malloc0 00:21:43.142 20:20:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:43.142 20:20:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=92087 00:21:43.142 20:20:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:43.142 20:20:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 92087 /var/tmp/bdevperf.sock 00:21:43.142 20:20:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 92087 ']' 00:21:43.142 20:20:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.142 20:20:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:21:43.142 20:20:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:43.142 20:20:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.142 20:20:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:43.142 [2024-07-15 20:20:08.403690] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:21:43.142 [2024-07-15 20:20:08.403751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92087 ] 00:21:43.142 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.142 [2024-07-15 20:20:08.460989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.401 [2024-07-15 20:20:08.528591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.401 20:20:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:43.401 20:20:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:43.401 20:20:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:43.659 [2024-07-15 20:20:08.839911] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:43.659 [2024-07-15 20:20:08.839995] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:43.659 TLSTESTn1 00:21:43.659 20:20:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:43.918 Running I/O for 10 seconds... 
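The client half of the FIPS test, as traced above, writes a fixed NVMe TLS PSK to disk, starts bdevperf idle on its own RPC socket, attaches to the target over TCP with that PSK (this is where the TLS handshake happens), and then kicks off the 10-second verify workload. A condensed sketch using only the commands visible in the trace, with paths shortened to be relative to the SPDK repo; the target-side subsystem setup (setup_nvmf_tgt_conf) is collapsed in the trace and is not reproduced here:

    # Sketch of the bdevperf/TLS flow from fips.sh (client side only).
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    echo -n "$key" > test/nvmf/fips/key.txt
    chmod 0600 test/nvmf/fips/key.txt             # PSK file must not be world-readable

    # Start bdevperf idle (-z) on a dedicated core with its own RPC socket,
    # then wait for that socket before configuring it.
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &

    # Attach to the listener in the namespace; --psk triggers TLS on the connection.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/fips/key.txt

    # Run the verify workload defined on the bdevperf command line above.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests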
00:21:53.904 00:21:53.904 Latency(us) 00:21:53.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.904 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:53.904 Verification LBA range: start 0x0 length 0x2000 00:21:53.904 TLSTESTn1 : 10.02 3802.75 14.85 0.00 0.00 33612.25 6404.65 45279.42 00:21:53.905 =================================================================================================================== 00:21:53.905 Total : 3802.75 14.85 0.00 0.00 33612.25 6404.65 45279.42 00:21:53.905 0 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:53.905 nvmf_trace.0 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 92087 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 92087 ']' 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 92087 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92087 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92087' 00:21:53.905 killing process with pid 92087 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 92087 00:21:53.905 Received shutdown signal, test time was about 10.000000 seconds 00:21:53.905 00:21:53.905 Latency(us) 00:21:53.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.905 =================================================================================================================== 00:21:53.905 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:53.905 [2024-07-15 20:20:19.245021] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:53.905 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 92087 00:21:54.164 20:20:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:54.164 20:20:19 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:21:54.164 20:20:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:54.164 20:20:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:54.164 20:20:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:54.164 20:20:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:54.164 20:20:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:54.164 rmmod nvme_tcp 00:21:54.164 rmmod nvme_fabrics 00:21:54.164 rmmod nvme_keyring 00:21:54.164 20:20:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:54.164 20:20:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:54.164 20:20:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:54.164 20:20:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 91810 ']' 00:21:54.164 20:20:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 91810 00:21:54.164 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 91810 ']' 00:21:54.164 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 91810 00:21:54.164 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:54.164 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:54.164 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91810 00:21:54.424 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:54.424 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:54.424 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91810' 00:21:54.424 killing process with pid 91810 00:21:54.424 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 91810 00:21:54.424 [2024-07-15 20:20:19.522600] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:54.424 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 91810 00:21:54.424 20:20:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:54.424 20:20:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:54.424 20:20:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:54.424 20:20:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:54.424 20:20:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:54.424 20:20:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.424 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.424 20:20:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.962 20:20:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:56.962 20:20:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:56.962 00:21:56.962 real 0m20.523s 00:21:56.962 user 0m21.575s 00:21:56.962 sys 0m9.463s 00:21:56.962 20:20:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:56.962 20:20:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:56.962 ************************************ 00:21:56.962 END TEST nvmf_fips 00:21:56.962 
************************************ 00:21:56.962 20:20:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:56.962 20:20:21 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:56.962 20:20:21 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:56.962 20:20:21 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:56.962 20:20:21 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:56.962 20:20:21 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:56.962 20:20:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:02.237 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:02.237 20:20:27 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:02.237 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:02.237 Found net devices under 0000:af:00.0: cvl_0_0 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:02.237 Found net devices under 0000:af:00.1: cvl_0_1 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:22:02.237 20:20:27 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:02.237 20:20:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:02.237 20:20:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
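The teardown traced just before "END TEST nvmf_fips" follows a fixed pattern: preserve the trace buffer from /dev/shm, stop bdevperf and the namespaced nvmf_tgt, unload the NVMe/TCP kernel modules, flush the test addressing, and remove the PSK file. A sketch of that sequence (PIDs and paths are the ones from this run; the namespace deletion is what _remove_spdk_ns amounts to, since its output is suppressed in the trace):

    # Sketch of the fips.sh cleanup path.
    tar -C /dev/shm -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0  # keep trace for offline analysis ($output_dir is a stand-in)
    kill 92087 && wait 92087          # bdevperf
    kill 91810 && wait 91810          # nvmf_tgt running inside the namespace

    modprobe -v -r nvme-tcp           # also drops nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics

    ip -4 addr flush cvl_0_1          # initiator-side address
    ip netns delete cvl_0_0_ns_spdk   # assumed effect of _remove_spdk_ns: port returns to the root namespace
    rm -f test/nvmf/fips/key.txt      # never leave the PSK behind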
00:22:02.237 20:20:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:02.237 ************************************ 00:22:02.237 START TEST nvmf_perf_adq 00:22:02.237 ************************************ 00:22:02.237 20:20:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:02.237 * Looking for test storage... 00:22:02.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:02.237 20:20:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.237 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:02.237 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.237 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.237 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.237 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.237 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.237 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.237 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:02.238 20:20:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:07.510 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:07.511 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:07.511 Found 0000:af:00.1 (0x8086 - 0x159b) 
00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:07.511 Found net devices under 0000:af:00.0: cvl_0_0 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:07.511 Found net devices under 0000:af:00.1: cvl_0_1 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:07.511 20:20:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:08.886 20:20:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:10.784 20:20:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:16.058 20:20:40 
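perf_adq.sh begins by bouncing the ice driver so the E810 ports come back with a clean channel/ADQ configuration; as the trace shows, adq_reload_driver is just an unload/reload with a settling delay before the namespace setup is repeated:

    # adq_reload_driver, as traced: reload ice before configuring ADQ.
    rmmod ice          # detach both E810 ports (cvl_0_0 / cvl_0_1 disappear)
    modprobe ice       # reload; the ports re-register with default settings
    sleep 5            # give firmware/link time to settle before nvmftestinit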
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:16.058 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:16.058 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:16.058 Found net devices under 0000:af:00.0: cvl_0_0 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.058 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:16.059 Found net devices under 0000:af:00.1: cvl_0_1 00:22:16.059 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.059 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:16.059 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:16.059 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:16.059 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:16.059 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:16.059 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.059 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.059 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:16.059 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:16.059 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:16.059 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:16.059 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:16.059 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:16.059 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.059 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:16.059 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:16.059 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:16.059 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:16.059 20:20:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:16.059 20:20:41 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:16.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:16.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:22:16.059 00:22:16.059 --- 10.0.0.2 ping statistics --- 00:22:16.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.059 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:16.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:16.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:22:16.059 00:22:16.059 --- 10.0.0.1 ping statistics --- 00:22:16.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.059 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=102315 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 102315 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 102315 ']' 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.059 20:20:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.059 [2024-07-15 20:20:41.221046] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
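For readers following the trace, the nvmf_tcp_init steps above reduce to a short sequence of standard iproute2/iptables commands that let a single host act as both NVMe/TCP target and initiator across the two E810 ports. A minimal sketch, using this run's interface names and addresses (cvl_0_0 / cvl_0_1, 10.0.0.1 / 10.0.0.2; these are specific to this machine):

  ip netns add cvl_0_0_ns_spdk                                  # namespace that will own the target-side port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                            # verify both directions before starting the target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt ... --wait-for-rpc, as the trace shows next), so all later RPC configuration talks to a process that only sees cvl_0_0.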
00:22:16.059 [2024-07-15 20:20:41.221103] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.059 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.059 [2024-07-15 20:20:41.307492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:16.059 [2024-07-15 20:20:41.400244] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.059 [2024-07-15 20:20:41.400296] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.059 [2024-07-15 20:20:41.400307] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.059 [2024-07-15 20:20:41.400316] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.059 [2024-07-15 20:20:41.400324] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:16.059 [2024-07-15 20:20:41.400366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.059 [2024-07-15 20:20:41.400468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.059 [2024-07-15 20:20:41.400496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:16.059 [2024-07-15 20:20:41.400498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.995 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:16.995 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:16.995 20:20:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:16.995 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:16.995 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.995 20:20:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.995 20:20:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:16.995 20:20:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:16.995 20:20:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:16.995 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.995 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.995 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.995 20:20:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:16.995 20:20:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:16.995 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.995 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.995 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.995 20:20:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:16.995 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.995 20:20:42 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.254 [2024-07-15 20:20:42.377275] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.254 Malloc1 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.254 [2024-07-15 20:20:42.433244] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=102600 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:17.254 20:20:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:17.254 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.157 20:20:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:19.157 20:20:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.157 20:20:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.157 20:20:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.157 20:20:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:19.157 
"tick_rate": 2200000000, 00:22:19.157 "poll_groups": [ 00:22:19.157 { 00:22:19.157 "name": "nvmf_tgt_poll_group_000", 00:22:19.157 "admin_qpairs": 1, 00:22:19.157 "io_qpairs": 1, 00:22:19.157 "current_admin_qpairs": 1, 00:22:19.157 "current_io_qpairs": 1, 00:22:19.157 "pending_bdev_io": 0, 00:22:19.157 "completed_nvme_io": 16055, 00:22:19.157 "transports": [ 00:22:19.157 { 00:22:19.157 "trtype": "TCP" 00:22:19.157 } 00:22:19.157 ] 00:22:19.157 }, 00:22:19.157 { 00:22:19.157 "name": "nvmf_tgt_poll_group_001", 00:22:19.157 "admin_qpairs": 0, 00:22:19.157 "io_qpairs": 1, 00:22:19.157 "current_admin_qpairs": 0, 00:22:19.157 "current_io_qpairs": 1, 00:22:19.157 "pending_bdev_io": 0, 00:22:19.157 "completed_nvme_io": 20305, 00:22:19.157 "transports": [ 00:22:19.157 { 00:22:19.157 "trtype": "TCP" 00:22:19.157 } 00:22:19.157 ] 00:22:19.157 }, 00:22:19.157 { 00:22:19.157 "name": "nvmf_tgt_poll_group_002", 00:22:19.157 "admin_qpairs": 0, 00:22:19.157 "io_qpairs": 1, 00:22:19.157 "current_admin_qpairs": 0, 00:22:19.157 "current_io_qpairs": 1, 00:22:19.157 "pending_bdev_io": 0, 00:22:19.157 "completed_nvme_io": 16066, 00:22:19.157 "transports": [ 00:22:19.157 { 00:22:19.157 "trtype": "TCP" 00:22:19.157 } 00:22:19.157 ] 00:22:19.157 }, 00:22:19.157 { 00:22:19.157 "name": "nvmf_tgt_poll_group_003", 00:22:19.157 "admin_qpairs": 0, 00:22:19.157 "io_qpairs": 1, 00:22:19.157 "current_admin_qpairs": 0, 00:22:19.157 "current_io_qpairs": 1, 00:22:19.157 "pending_bdev_io": 0, 00:22:19.157 "completed_nvme_io": 15448, 00:22:19.157 "transports": [ 00:22:19.157 { 00:22:19.157 "trtype": "TCP" 00:22:19.157 } 00:22:19.157 ] 00:22:19.157 } 00:22:19.157 ] 00:22:19.157 }' 00:22:19.157 20:20:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:19.157 20:20:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:19.416 20:20:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:19.416 20:20:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:19.416 20:20:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 102600 00:22:27.536 Initializing NVMe Controllers 00:22:27.537 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:27.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:27.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:27.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:27.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:27.537 Initialization complete. Launching workers. 
00:22:27.537 ======================================================== 00:22:27.537 Latency(us) 00:22:27.537 Device Information : IOPS MiB/s Average min max 00:22:27.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8061.29 31.49 7940.94 3267.44 14716.99 00:22:27.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10690.28 41.76 5987.61 1978.55 9332.60 00:22:27.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8568.47 33.47 7470.87 2343.75 15288.45 00:22:27.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8546.97 33.39 7487.25 1852.17 15121.97 00:22:27.537 ======================================================== 00:22:27.537 Total : 35867.00 140.11 7138.33 1852.17 15288.45 00:22:27.537 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:27.537 rmmod nvme_tcp 00:22:27.537 rmmod nvme_fabrics 00:22:27.537 rmmod nvme_keyring 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 102315 ']' 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 102315 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 102315 ']' 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 102315 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 102315 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 102315' 00:22:27.537 killing process with pid 102315 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 102315 00:22:27.537 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 102315 00:22:27.796 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:27.796 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:27.796 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:27.796 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:27.796 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:27.796 20:20:52 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.796 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:27.796 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.723 20:20:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:29.723 20:20:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:29.723 20:20:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:31.193 20:20:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:33.096 20:20:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.362 20:21:03 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:38.362 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:38.362 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:38.362 Found net devices under 0000:af:00.0: cvl_0_0 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:38.362 Found net devices under 0000:af:00.1: cvl_0_1 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:38.362 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.363 
20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:38.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:22:38.363 00:22:38.363 --- 10.0.0.2 ping statistics --- 00:22:38.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.363 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:22:38.363 00:22:38.363 --- 10.0.0.1 ping statistics --- 00:22:38.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.363 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:38.363 net.core.busy_poll = 1 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:38.363 net.core.busy_read = 1 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:38.363 20:21:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:38.621 20:21:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:22:38.621 20:21:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:38.621 20:21:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:38.621 20:21:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:38.621 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.621 20:21:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:38.621 20:21:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.621 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=106770 00:22:38.621 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 106770 00:22:38.621 20:21:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 106770 ']' 00:22:38.621 20:21:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.621 20:21:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.621 20:21:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.621 20:21:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:38.621 20:21:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.621 20:21:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.621 [2024-07-15 20:21:03.858579] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:22:38.621 [2024-07-15 20:21:03.858640] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.621 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.621 [2024-07-15 20:21:03.946802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.879 [2024-07-15 20:21:04.040437] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.879 [2024-07-15 20:21:04.040479] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.879 [2024-07-15 20:21:04.040488] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.879 [2024-07-15 20:21:04.040496] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.879 [2024-07-15 20:21:04.040504] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
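Stripped of the trace prefixes, the adq_configure_driver sequence above is the part that actually enables ADQ on the ice NIC: it turns on hardware tc offload, enables socket busy polling, carves the device queues into two traffic classes, and installs a hardware flower filter that steers NVMe/TCP traffic (TCP dport 4420 to 10.0.0.2) into the second class. A condensed sketch using this run's device name and addresses (the ethtool/tc commands run inside the target namespace, as in the trace):

  ethtool --offload cvl_0_0 hw-tc-offload on                    # allow tc filters to be offloaded to the NIC
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1                                # busy-poll sockets instead of sleeping in epoll
  sysctl -w net.core.busy_read=1
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1   # NVMe/TCP port 4420 goes to TC 1, in hardware

Further down in the trace the second target instance is configured with sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport --sock-priority 1, so incoming connections are grouped into poll groups by the hardware queue they arrive on rather than being distributed round-robin as in the first run.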
00:22:38.879 [2024-07-15 20:21:04.040549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.879 [2024-07-15 20:21:04.040572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.879 [2024-07-15 20:21:04.040668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.879 [2024-07-15 20:21:04.040671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.812 [2024-07-15 20:21:04.986361] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.812 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.812 Malloc1 00:22:39.812 20:21:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.812 20:21:05 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:39.812 20:21:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.812 20:21:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.812 20:21:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.812 20:21:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:39.812 20:21:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.812 20:21:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.812 20:21:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.812 20:21:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:39.812 20:21:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.812 20:21:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.812 [2024-07-15 20:21:05.033995] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.812 20:21:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.812 20:21:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=107055 00:22:39.812 20:21:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:39.812 20:21:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:39.812 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.719 20:21:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:41.719 20:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.719 20:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.719 20:21:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.719 20:21:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:41.719 "tick_rate": 2200000000, 00:22:41.719 "poll_groups": [ 00:22:41.719 { 00:22:41.719 "name": "nvmf_tgt_poll_group_000", 00:22:41.719 "admin_qpairs": 1, 00:22:41.719 "io_qpairs": 1, 00:22:41.719 "current_admin_qpairs": 1, 00:22:41.719 "current_io_qpairs": 1, 00:22:41.719 "pending_bdev_io": 0, 00:22:41.719 "completed_nvme_io": 15233, 00:22:41.719 "transports": [ 00:22:41.719 { 00:22:41.719 "trtype": "TCP" 00:22:41.719 } 00:22:41.719 ] 00:22:41.719 }, 00:22:41.719 { 00:22:41.719 "name": "nvmf_tgt_poll_group_001", 00:22:41.719 "admin_qpairs": 0, 00:22:41.719 "io_qpairs": 3, 00:22:41.719 "current_admin_qpairs": 0, 00:22:41.719 "current_io_qpairs": 3, 00:22:41.719 "pending_bdev_io": 0, 00:22:41.719 "completed_nvme_io": 35846, 00:22:41.719 "transports": [ 00:22:41.719 { 00:22:41.719 "trtype": "TCP" 00:22:41.719 } 00:22:41.719 ] 00:22:41.719 }, 00:22:41.719 { 00:22:41.719 "name": "nvmf_tgt_poll_group_002", 00:22:41.719 "admin_qpairs": 0, 00:22:41.719 "io_qpairs": 0, 00:22:41.719 "current_admin_qpairs": 0, 00:22:41.719 "current_io_qpairs": 0, 00:22:41.719 "pending_bdev_io": 0, 00:22:41.719 "completed_nvme_io": 0, 
00:22:41.719 "transports": [ 00:22:41.719 { 00:22:41.719 "trtype": "TCP" 00:22:41.719 } 00:22:41.719 ] 00:22:41.719 }, 00:22:41.719 { 00:22:41.719 "name": "nvmf_tgt_poll_group_003", 00:22:41.719 "admin_qpairs": 0, 00:22:41.719 "io_qpairs": 0, 00:22:41.719 "current_admin_qpairs": 0, 00:22:41.719 "current_io_qpairs": 0, 00:22:41.719 "pending_bdev_io": 0, 00:22:41.719 "completed_nvme_io": 0, 00:22:41.719 "transports": [ 00:22:41.719 { 00:22:41.719 "trtype": "TCP" 00:22:41.719 } 00:22:41.719 ] 00:22:41.719 } 00:22:41.719 ] 00:22:41.719 }' 00:22:41.719 20:21:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:41.719 20:21:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:41.980 20:21:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:41.980 20:21:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:41.980 20:21:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 107055 00:22:50.084 Initializing NVMe Controllers 00:22:50.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:50.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:50.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:50.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:50.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:50.084 Initialization complete. Launching workers. 00:22:50.084 ======================================================== 00:22:50.084 Latency(us) 00:22:50.084 Device Information : IOPS MiB/s Average min max 00:22:50.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8106.50 31.67 7893.78 2714.54 49276.79 00:22:50.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6652.07 25.98 9620.25 1645.07 56940.83 00:22:50.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6511.29 25.43 9830.56 1464.53 55564.38 00:22:50.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5767.93 22.53 11093.58 1703.14 56128.35 00:22:50.084 ======================================================== 00:22:50.084 Total : 27037.78 105.62 9467.57 1464.53 56940.83 00:22:50.084 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:50.084 rmmod nvme_tcp 00:22:50.084 rmmod nvme_fabrics 00:22:50.084 rmmod nvme_keyring 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 106770 ']' 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 106770 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 106770 ']' 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 106770 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 106770 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 106770' 00:22:50.084 killing process with pid 106770 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 106770 00:22:50.084 20:21:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 106770 00:22:50.343 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:50.343 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:50.343 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:50.343 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:50.343 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:50.343 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.343 20:21:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:50.343 20:21:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.633 20:21:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:53.633 20:21:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:53.633 00:22:53.633 real 0m51.529s 00:22:53.633 user 2m50.500s 00:22:53.633 sys 0m9.690s 00:22:53.633 20:21:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:53.633 20:21:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.633 ************************************ 00:22:53.633 END TEST nvmf_perf_adq 00:22:53.633 ************************************ 00:22:53.633 20:21:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:53.633 20:21:18 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:53.633 20:21:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:53.633 20:21:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.633 20:21:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:53.633 ************************************ 00:22:53.633 START TEST nvmf_shutdown 00:22:53.633 ************************************ 00:22:53.633 20:21:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:53.633 * Looking for test storage... 
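Before the shutdown tests begin, a note on what the two nvmf_get_stats checks in nvmf_perf_adq above actually verified. In the first (non-ADQ) run each of the four poll groups reported current_io_qpairs == 1, i.e. the four perf connections were spread evenly, so the jq pipeline counted 4 matching groups and the [[ 4 -ne 4 ]] guard passed. In the second (ADQ) run the same connections were concentrated onto two poll groups, leaving two with current_io_qpairs == 0 (consistent with the two-queue traffic class configured earlier), which the count=2 / [[ 2 -lt 2 ]] guard accepts. The check itself is simply the target's statistics RPC piped through jq; roughly (rpc_cmd is the autotest wrapper around scripts/rpc.py):

  rpc_cmd nvmf_get_stats \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
    | wc -l                      # number of poll groups left idle by ADQ steering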
00:22:53.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:53.633 20:21:18 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.633 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:53.633 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.633 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:53.634 ************************************ 00:22:53.634 START TEST nvmf_shutdown_tc1 00:22:53.634 ************************************ 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:22:53.634 20:21:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:53.634 20:21:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:58.903 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:58.903 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.903 20:21:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.903 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:58.904 Found net devices under 0000:af:00.0: cvl_0_0 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:58.904 Found net devices under 0000:af:00.1: cvl_0_1 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.904 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.162 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.162 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.162 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:59.162 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.162 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.162 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.162 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:59.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:22:59.162 00:22:59.162 --- 10.0.0.2 ping statistics --- 00:22:59.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.162 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:22:59.162 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:59.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:22:59.162 00:22:59.162 --- 10.0.0.1 ping statistics --- 00:22:59.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.162 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:22:59.162 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.162 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:59.162 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:59.162 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.162 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:59.162 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:59.162 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.162 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:59.162 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:59.420 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:59.420 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:59.420 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:59.420 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:59.420 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=113093 00:22:59.420 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 113093 00:22:59.420 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:59.420 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 113093 ']' 00:22:59.420 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.420 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:59.421 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.421 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:59.421 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:59.421 [2024-07-15 20:21:24.588997] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:22:59.421 [2024-07-15 20:21:24.589051] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.421 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.421 [2024-07-15 20:21:24.665361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:59.421 [2024-07-15 20:21:24.757790] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.421 [2024-07-15 20:21:24.757833] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.421 [2024-07-15 20:21:24.757843] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.421 [2024-07-15 20:21:24.757852] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.421 [2024-07-15 20:21:24.757859] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:59.421 [2024-07-15 20:21:24.757919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.421 [2024-07-15 20:21:24.758014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:59.421 [2024-07-15 20:21:24.758127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.421 [2024-07-15 20:21:24.758127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:59.680 [2024-07-15 20:21:24.909584] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:59.680 20:21:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.680 20:21:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:59.680 Malloc1 00:22:59.680 [2024-07-15 20:21:25.009975] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.939 Malloc2 00:22:59.939 Malloc3 00:22:59.939 Malloc4 00:22:59.939 Malloc5 00:22:59.939 Malloc6 00:22:59.939 Malloc7 00:23:00.198 Malloc8 00:23:00.198 Malloc9 00:23:00.198 Malloc10 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=113162 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 113162 
/var/tmp/bdevperf.sock 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 113162 ']' 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.198 { 00:23:00.198 "params": { 00:23:00.198 "name": "Nvme$subsystem", 00:23:00.198 "trtype": "$TEST_TRANSPORT", 00:23:00.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.198 "adrfam": "ipv4", 00:23:00.198 "trsvcid": "$NVMF_PORT", 00:23:00.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.198 "hdgst": ${hdgst:-false}, 00:23:00.198 "ddgst": ${ddgst:-false} 00:23:00.198 }, 00:23:00.198 "method": "bdev_nvme_attach_controller" 00:23:00.198 } 00:23:00.198 EOF 00:23:00.198 )") 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.198 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.198 { 00:23:00.198 "params": { 00:23:00.198 "name": "Nvme$subsystem", 00:23:00.199 "trtype": "$TEST_TRANSPORT", 00:23:00.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.199 "adrfam": "ipv4", 00:23:00.199 "trsvcid": "$NVMF_PORT", 00:23:00.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.199 "hdgst": ${hdgst:-false}, 00:23:00.199 "ddgst": ${ddgst:-false} 00:23:00.199 }, 00:23:00.199 "method": "bdev_nvme_attach_controller" 00:23:00.199 } 00:23:00.199 EOF 00:23:00.199 )") 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.199 { 00:23:00.199 "params": { 00:23:00.199 
"name": "Nvme$subsystem", 00:23:00.199 "trtype": "$TEST_TRANSPORT", 00:23:00.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.199 "adrfam": "ipv4", 00:23:00.199 "trsvcid": "$NVMF_PORT", 00:23:00.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.199 "hdgst": ${hdgst:-false}, 00:23:00.199 "ddgst": ${ddgst:-false} 00:23:00.199 }, 00:23:00.199 "method": "bdev_nvme_attach_controller" 00:23:00.199 } 00:23:00.199 EOF 00:23:00.199 )") 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.199 { 00:23:00.199 "params": { 00:23:00.199 "name": "Nvme$subsystem", 00:23:00.199 "trtype": "$TEST_TRANSPORT", 00:23:00.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.199 "adrfam": "ipv4", 00:23:00.199 "trsvcid": "$NVMF_PORT", 00:23:00.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.199 "hdgst": ${hdgst:-false}, 00:23:00.199 "ddgst": ${ddgst:-false} 00:23:00.199 }, 00:23:00.199 "method": "bdev_nvme_attach_controller" 00:23:00.199 } 00:23:00.199 EOF 00:23:00.199 )") 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.199 { 00:23:00.199 "params": { 00:23:00.199 "name": "Nvme$subsystem", 00:23:00.199 "trtype": "$TEST_TRANSPORT", 00:23:00.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.199 "adrfam": "ipv4", 00:23:00.199 "trsvcid": "$NVMF_PORT", 00:23:00.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.199 "hdgst": ${hdgst:-false}, 00:23:00.199 "ddgst": ${ddgst:-false} 00:23:00.199 }, 00:23:00.199 "method": "bdev_nvme_attach_controller" 00:23:00.199 } 00:23:00.199 EOF 00:23:00.199 )") 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.199 { 00:23:00.199 "params": { 00:23:00.199 "name": "Nvme$subsystem", 00:23:00.199 "trtype": "$TEST_TRANSPORT", 00:23:00.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.199 "adrfam": "ipv4", 00:23:00.199 "trsvcid": "$NVMF_PORT", 00:23:00.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.199 "hdgst": ${hdgst:-false}, 00:23:00.199 "ddgst": ${ddgst:-false} 00:23:00.199 }, 00:23:00.199 "method": "bdev_nvme_attach_controller" 00:23:00.199 } 00:23:00.199 EOF 00:23:00.199 )") 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.199 { 00:23:00.199 "params": { 00:23:00.199 "name": "Nvme$subsystem", 
00:23:00.199 "trtype": "$TEST_TRANSPORT", 00:23:00.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.199 "adrfam": "ipv4", 00:23:00.199 "trsvcid": "$NVMF_PORT", 00:23:00.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.199 "hdgst": ${hdgst:-false}, 00:23:00.199 "ddgst": ${ddgst:-false} 00:23:00.199 }, 00:23:00.199 "method": "bdev_nvme_attach_controller" 00:23:00.199 } 00:23:00.199 EOF 00:23:00.199 )") 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.199 [2024-07-15 20:21:25.499342] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:23:00.199 [2024-07-15 20:21:25.499403] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.199 { 00:23:00.199 "params": { 00:23:00.199 "name": "Nvme$subsystem", 00:23:00.199 "trtype": "$TEST_TRANSPORT", 00:23:00.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.199 "adrfam": "ipv4", 00:23:00.199 "trsvcid": "$NVMF_PORT", 00:23:00.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.199 "hdgst": ${hdgst:-false}, 00:23:00.199 "ddgst": ${ddgst:-false} 00:23:00.199 }, 00:23:00.199 "method": "bdev_nvme_attach_controller" 00:23:00.199 } 00:23:00.199 EOF 00:23:00.199 )") 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.199 { 00:23:00.199 "params": { 00:23:00.199 "name": "Nvme$subsystem", 00:23:00.199 "trtype": "$TEST_TRANSPORT", 00:23:00.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.199 "adrfam": "ipv4", 00:23:00.199 "trsvcid": "$NVMF_PORT", 00:23:00.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.199 "hdgst": ${hdgst:-false}, 00:23:00.199 "ddgst": ${ddgst:-false} 00:23:00.199 }, 00:23:00.199 "method": "bdev_nvme_attach_controller" 00:23:00.199 } 00:23:00.199 EOF 00:23:00.199 )") 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.199 { 00:23:00.199 "params": { 00:23:00.199 "name": "Nvme$subsystem", 00:23:00.199 "trtype": "$TEST_TRANSPORT", 00:23:00.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.199 "adrfam": "ipv4", 00:23:00.199 "trsvcid": "$NVMF_PORT", 00:23:00.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.199 "hdgst": ${hdgst:-false}, 00:23:00.199 "ddgst": ${ddgst:-false} 00:23:00.199 }, 00:23:00.199 "method": "bdev_nvme_attach_controller" 00:23:00.199 } 00:23:00.199 EOF 00:23:00.199 )") 00:23:00.199 20:21:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:00.199 20:21:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:00.199 "params": { 00:23:00.199 "name": "Nvme1", 00:23:00.199 "trtype": "tcp", 00:23:00.199 "traddr": "10.0.0.2", 00:23:00.199 "adrfam": "ipv4", 00:23:00.199 "trsvcid": "4420", 00:23:00.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.199 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.199 "hdgst": false, 00:23:00.199 "ddgst": false 00:23:00.199 }, 00:23:00.199 "method": "bdev_nvme_attach_controller" 00:23:00.199 },{ 00:23:00.199 "params": { 00:23:00.199 "name": "Nvme2", 00:23:00.199 "trtype": "tcp", 00:23:00.199 "traddr": "10.0.0.2", 00:23:00.199 "adrfam": "ipv4", 00:23:00.199 "trsvcid": "4420", 00:23:00.199 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:00.199 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:00.199 "hdgst": false, 00:23:00.199 "ddgst": false 00:23:00.199 }, 00:23:00.199 "method": "bdev_nvme_attach_controller" 00:23:00.199 },{ 00:23:00.199 "params": { 00:23:00.199 "name": "Nvme3", 00:23:00.199 "trtype": "tcp", 00:23:00.199 "traddr": "10.0.0.2", 00:23:00.199 "adrfam": "ipv4", 00:23:00.199 "trsvcid": "4420", 00:23:00.199 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:00.199 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:00.199 "hdgst": false, 00:23:00.199 "ddgst": false 00:23:00.199 }, 00:23:00.199 "method": "bdev_nvme_attach_controller" 00:23:00.199 },{ 00:23:00.199 "params": { 00:23:00.199 "name": "Nvme4", 00:23:00.199 "trtype": "tcp", 00:23:00.199 "traddr": "10.0.0.2", 00:23:00.199 "adrfam": "ipv4", 00:23:00.199 "trsvcid": "4420", 00:23:00.199 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:00.199 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:00.199 "hdgst": false, 00:23:00.199 "ddgst": false 00:23:00.199 }, 00:23:00.199 "method": "bdev_nvme_attach_controller" 00:23:00.199 },{ 00:23:00.199 "params": { 00:23:00.199 "name": "Nvme5", 00:23:00.199 "trtype": "tcp", 00:23:00.199 "traddr": "10.0.0.2", 00:23:00.199 "adrfam": "ipv4", 00:23:00.199 "trsvcid": "4420", 00:23:00.199 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:00.199 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:00.200 "hdgst": false, 00:23:00.200 "ddgst": false 00:23:00.200 }, 00:23:00.200 "method": "bdev_nvme_attach_controller" 00:23:00.200 },{ 00:23:00.200 "params": { 00:23:00.200 "name": "Nvme6", 00:23:00.200 "trtype": "tcp", 00:23:00.200 "traddr": "10.0.0.2", 00:23:00.200 "adrfam": "ipv4", 00:23:00.200 "trsvcid": "4420", 00:23:00.200 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:00.200 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:00.200 "hdgst": false, 00:23:00.200 "ddgst": false 00:23:00.200 }, 00:23:00.200 "method": "bdev_nvme_attach_controller" 00:23:00.200 },{ 00:23:00.200 "params": { 00:23:00.200 "name": "Nvme7", 00:23:00.200 "trtype": "tcp", 00:23:00.200 "traddr": "10.0.0.2", 00:23:00.200 "adrfam": "ipv4", 00:23:00.200 "trsvcid": "4420", 00:23:00.200 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:00.200 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:00.200 "hdgst": false, 00:23:00.200 "ddgst": false 00:23:00.200 }, 00:23:00.200 "method": "bdev_nvme_attach_controller" 00:23:00.200 },{ 00:23:00.200 "params": { 00:23:00.200 "name": "Nvme8", 00:23:00.200 "trtype": "tcp", 00:23:00.200 "traddr": "10.0.0.2", 00:23:00.200 "adrfam": "ipv4", 
00:23:00.200 "trsvcid": "4420", 00:23:00.200 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:00.200 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:00.200 "hdgst": false, 00:23:00.200 "ddgst": false 00:23:00.200 }, 00:23:00.200 "method": "bdev_nvme_attach_controller" 00:23:00.200 },{ 00:23:00.200 "params": { 00:23:00.200 "name": "Nvme9", 00:23:00.200 "trtype": "tcp", 00:23:00.200 "traddr": "10.0.0.2", 00:23:00.200 "adrfam": "ipv4", 00:23:00.200 "trsvcid": "4420", 00:23:00.200 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:00.200 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:00.200 "hdgst": false, 00:23:00.200 "ddgst": false 00:23:00.200 }, 00:23:00.200 "method": "bdev_nvme_attach_controller" 00:23:00.200 },{ 00:23:00.200 "params": { 00:23:00.200 "name": "Nvme10", 00:23:00.200 "trtype": "tcp", 00:23:00.200 "traddr": "10.0.0.2", 00:23:00.200 "adrfam": "ipv4", 00:23:00.200 "trsvcid": "4420", 00:23:00.200 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:00.200 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:00.200 "hdgst": false, 00:23:00.200 "ddgst": false 00:23:00.200 }, 00:23:00.200 "method": "bdev_nvme_attach_controller" 00:23:00.200 }' 00:23:00.200 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.458 [2024-07-15 20:21:25.583815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.458 [2024-07-15 20:21:25.671376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.827 20:21:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.827 20:21:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:01.827 20:21:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:01.827 20:21:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.827 20:21:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.827 20:21:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.827 20:21:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 113162 00:23:01.827 20:21:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:01.827 20:21:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:02.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 113162 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:02.760 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 113093 00:23:02.760 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:02.760 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:02.760 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:02.760 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:02.760 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:02.760 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:02.760 { 00:23:02.760 "params": { 00:23:02.760 "name": "Nvme$subsystem", 00:23:02.760 "trtype": "$TEST_TRANSPORT", 00:23:02.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:02.760 "adrfam": "ipv4", 00:23:02.760 "trsvcid": "$NVMF_PORT", 00:23:02.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:02.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:02.760 "hdgst": ${hdgst:-false}, 00:23:02.760 "ddgst": ${ddgst:-false} 00:23:02.760 }, 00:23:02.760 "method": "bdev_nvme_attach_controller" 00:23:02.760 } 00:23:02.760 EOF 00:23:02.760 )") 00:23:02.760 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:02.760 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:02.760 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:02.760 { 00:23:02.760 "params": { 00:23:02.760 "name": "Nvme$subsystem", 00:23:02.760 "trtype": "$TEST_TRANSPORT", 00:23:02.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:02.760 "adrfam": "ipv4", 00:23:02.760 "trsvcid": "$NVMF_PORT", 00:23:02.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:02.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:02.760 "hdgst": ${hdgst:-false}, 00:23:02.760 "ddgst": ${ddgst:-false} 00:23:02.760 }, 00:23:02.760 "method": "bdev_nvme_attach_controller" 00:23:02.760 } 00:23:02.760 EOF 00:23:02.760 )") 00:23:02.760 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:02.760 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:02.760 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:02.760 { 00:23:02.760 "params": { 00:23:02.760 "name": "Nvme$subsystem", 00:23:02.760 "trtype": "$TEST_TRANSPORT", 00:23:02.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:02.760 "adrfam": "ipv4", 00:23:02.760 "trsvcid": "$NVMF_PORT", 00:23:02.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:02.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:02.760 "hdgst": ${hdgst:-false}, 00:23:02.760 "ddgst": ${ddgst:-false} 00:23:02.760 }, 00:23:02.760 "method": "bdev_nvme_attach_controller" 00:23:02.760 } 00:23:02.760 EOF 00:23:02.760 )") 00:23:02.760 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:02.760 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:02.760 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:02.760 { 00:23:02.760 "params": { 00:23:02.760 "name": "Nvme$subsystem", 00:23:02.760 "trtype": "$TEST_TRANSPORT", 00:23:02.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:02.760 "adrfam": "ipv4", 00:23:02.760 "trsvcid": "$NVMF_PORT", 00:23:02.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:02.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:02.760 "hdgst": ${hdgst:-false}, 00:23:02.760 "ddgst": ${ddgst:-false} 00:23:02.760 }, 00:23:02.760 "method": "bdev_nvme_attach_controller" 00:23:02.760 } 00:23:02.760 EOF 00:23:02.760 )") 00:23:02.760 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:02.760 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:02.760 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:23:02.760 { 00:23:02.760 "params": { 00:23:02.760 "name": "Nvme$subsystem", 00:23:02.760 "trtype": "$TEST_TRANSPORT", 00:23:02.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:02.760 "adrfam": "ipv4", 00:23:02.760 "trsvcid": "$NVMF_PORT", 00:23:02.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:02.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:02.760 "hdgst": ${hdgst:-false}, 00:23:02.760 "ddgst": ${ddgst:-false} 00:23:02.760 }, 00:23:02.760 "method": "bdev_nvme_attach_controller" 00:23:02.760 } 00:23:02.760 EOF 00:23:02.760 )") 00:23:02.760 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.018 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.018 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.018 { 00:23:03.018 "params": { 00:23:03.018 "name": "Nvme$subsystem", 00:23:03.018 "trtype": "$TEST_TRANSPORT", 00:23:03.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.018 "adrfam": "ipv4", 00:23:03.018 "trsvcid": "$NVMF_PORT", 00:23:03.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.018 "hdgst": ${hdgst:-false}, 00:23:03.018 "ddgst": ${ddgst:-false} 00:23:03.018 }, 00:23:03.018 "method": "bdev_nvme_attach_controller" 00:23:03.018 } 00:23:03.018 EOF 00:23:03.018 )") 00:23:03.018 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.018 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.018 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.018 { 00:23:03.018 "params": { 00:23:03.018 "name": "Nvme$subsystem", 00:23:03.018 "trtype": "$TEST_TRANSPORT", 00:23:03.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.018 "adrfam": "ipv4", 00:23:03.018 "trsvcid": "$NVMF_PORT", 00:23:03.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.018 "hdgst": ${hdgst:-false}, 00:23:03.018 "ddgst": ${ddgst:-false} 00:23:03.018 }, 00:23:03.018 "method": "bdev_nvme_attach_controller" 00:23:03.018 } 00:23:03.018 EOF 00:23:03.018 )") 00:23:03.018 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.018 [2024-07-15 20:21:28.125741] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:23:03.018 [2024-07-15 20:21:28.125803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113707 ] 00:23:03.018 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.018 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.018 { 00:23:03.018 "params": { 00:23:03.018 "name": "Nvme$subsystem", 00:23:03.018 "trtype": "$TEST_TRANSPORT", 00:23:03.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.018 "adrfam": "ipv4", 00:23:03.018 "trsvcid": "$NVMF_PORT", 00:23:03.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.018 "hdgst": ${hdgst:-false}, 00:23:03.018 "ddgst": ${ddgst:-false} 00:23:03.018 }, 00:23:03.018 "method": "bdev_nvme_attach_controller" 00:23:03.018 } 00:23:03.018 EOF 00:23:03.018 )") 00:23:03.018 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.018 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.018 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.018 { 00:23:03.018 "params": { 00:23:03.018 "name": "Nvme$subsystem", 00:23:03.018 "trtype": "$TEST_TRANSPORT", 00:23:03.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.018 "adrfam": "ipv4", 00:23:03.018 "trsvcid": "$NVMF_PORT", 00:23:03.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.018 "hdgst": ${hdgst:-false}, 00:23:03.018 "ddgst": ${ddgst:-false} 00:23:03.018 }, 00:23:03.018 "method": "bdev_nvme_attach_controller" 00:23:03.018 } 00:23:03.018 EOF 00:23:03.018 )") 00:23:03.018 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.018 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.018 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.018 { 00:23:03.018 "params": { 00:23:03.018 "name": "Nvme$subsystem", 00:23:03.018 "trtype": "$TEST_TRANSPORT", 00:23:03.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.018 "adrfam": "ipv4", 00:23:03.018 "trsvcid": "$NVMF_PORT", 00:23:03.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.018 "hdgst": ${hdgst:-false}, 00:23:03.018 "ddgst": ${ddgst:-false} 00:23:03.018 }, 00:23:03.018 "method": "bdev_nvme_attach_controller" 00:23:03.018 } 00:23:03.018 EOF 00:23:03.018 )") 00:23:03.018 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.018 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
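The xtrace above shows how gen_nvmf_target_json assembles the bdevperf configuration: one bdev_nvme_attach_controller fragment per subsystem is filled in from a heredoc template and appended to the config array, the fragments are joined with IFS=',' and printed, and jq pretty-prints the result before it reaches bdevperf through process substitution (--json /dev/fd/62 above). A minimal bash sketch of that pattern follows; the helper name gen_target_json_sketch and the outer "subsystems"/"bdev" wrapper are illustrative assumptions, since the trace only shows the per-controller fragments.

  gen_target_json_sketch() {
      # Build one attach-controller fragment per requested subsystem id.
      local config=() subsystem
      for subsystem in "${@:-1}"; do
          config+=("$(cat <<-EOF
          {
            "params": {
              "name": "Nvme$subsystem",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
              "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
EOF
          )")
      done
      # Join the fragments with commas and let jq validate/pretty-print the document.
      local IFS=,
      printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
  }
  # Usage in the spirit of the run above (paths and flags are assumptions):
  #   bdevperf --json <(gen_target_json_sketch 1 2 3) -q 64 -o 65536 -w verify -t 1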
00:23:03.018 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:03.018 20:21:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:03.018 "params": { 00:23:03.018 "name": "Nvme1", 00:23:03.018 "trtype": "tcp", 00:23:03.018 "traddr": "10.0.0.2", 00:23:03.018 "adrfam": "ipv4", 00:23:03.018 "trsvcid": "4420", 00:23:03.018 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.018 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:03.018 "hdgst": false, 00:23:03.018 "ddgst": false 00:23:03.018 }, 00:23:03.018 "method": "bdev_nvme_attach_controller" 00:23:03.018 },{ 00:23:03.018 "params": { 00:23:03.018 "name": "Nvme2", 00:23:03.018 "trtype": "tcp", 00:23:03.018 "traddr": "10.0.0.2", 00:23:03.018 "adrfam": "ipv4", 00:23:03.018 "trsvcid": "4420", 00:23:03.018 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:03.018 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:03.018 "hdgst": false, 00:23:03.018 "ddgst": false 00:23:03.018 }, 00:23:03.018 "method": "bdev_nvme_attach_controller" 00:23:03.018 },{ 00:23:03.018 "params": { 00:23:03.018 "name": "Nvme3", 00:23:03.018 "trtype": "tcp", 00:23:03.018 "traddr": "10.0.0.2", 00:23:03.018 "adrfam": "ipv4", 00:23:03.018 "trsvcid": "4420", 00:23:03.018 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:03.018 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:03.018 "hdgst": false, 00:23:03.018 "ddgst": false 00:23:03.018 }, 00:23:03.018 "method": "bdev_nvme_attach_controller" 00:23:03.018 },{ 00:23:03.018 "params": { 00:23:03.018 "name": "Nvme4", 00:23:03.018 "trtype": "tcp", 00:23:03.018 "traddr": "10.0.0.2", 00:23:03.018 "adrfam": "ipv4", 00:23:03.019 "trsvcid": "4420", 00:23:03.019 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:03.019 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:03.019 "hdgst": false, 00:23:03.019 "ddgst": false 00:23:03.019 }, 00:23:03.019 "method": "bdev_nvme_attach_controller" 00:23:03.019 },{ 00:23:03.019 "params": { 00:23:03.019 "name": "Nvme5", 00:23:03.019 "trtype": "tcp", 00:23:03.019 "traddr": "10.0.0.2", 00:23:03.019 "adrfam": "ipv4", 00:23:03.019 "trsvcid": "4420", 00:23:03.019 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:03.019 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:03.019 "hdgst": false, 00:23:03.019 "ddgst": false 00:23:03.019 }, 00:23:03.019 "method": "bdev_nvme_attach_controller" 00:23:03.019 },{ 00:23:03.019 "params": { 00:23:03.019 "name": "Nvme6", 00:23:03.019 "trtype": "tcp", 00:23:03.019 "traddr": "10.0.0.2", 00:23:03.019 "adrfam": "ipv4", 00:23:03.019 "trsvcid": "4420", 00:23:03.019 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:03.019 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:03.019 "hdgst": false, 00:23:03.019 "ddgst": false 00:23:03.019 }, 00:23:03.019 "method": "bdev_nvme_attach_controller" 00:23:03.019 },{ 00:23:03.019 "params": { 00:23:03.019 "name": "Nvme7", 00:23:03.019 "trtype": "tcp", 00:23:03.019 "traddr": "10.0.0.2", 00:23:03.019 "adrfam": "ipv4", 00:23:03.019 "trsvcid": "4420", 00:23:03.019 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:03.019 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:03.019 "hdgst": false, 00:23:03.019 "ddgst": false 00:23:03.019 }, 00:23:03.019 "method": "bdev_nvme_attach_controller" 00:23:03.019 },{ 00:23:03.019 "params": { 00:23:03.019 "name": "Nvme8", 00:23:03.019 "trtype": "tcp", 00:23:03.019 "traddr": "10.0.0.2", 00:23:03.019 "adrfam": "ipv4", 00:23:03.019 "trsvcid": "4420", 00:23:03.019 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:03.019 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:03.019 "hdgst": false, 
00:23:03.019 "ddgst": false 00:23:03.019 }, 00:23:03.019 "method": "bdev_nvme_attach_controller" 00:23:03.019 },{ 00:23:03.019 "params": { 00:23:03.019 "name": "Nvme9", 00:23:03.019 "trtype": "tcp", 00:23:03.019 "traddr": "10.0.0.2", 00:23:03.019 "adrfam": "ipv4", 00:23:03.019 "trsvcid": "4420", 00:23:03.019 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:03.019 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:03.019 "hdgst": false, 00:23:03.019 "ddgst": false 00:23:03.019 }, 00:23:03.019 "method": "bdev_nvme_attach_controller" 00:23:03.019 },{ 00:23:03.019 "params": { 00:23:03.019 "name": "Nvme10", 00:23:03.019 "trtype": "tcp", 00:23:03.019 "traddr": "10.0.0.2", 00:23:03.019 "adrfam": "ipv4", 00:23:03.019 "trsvcid": "4420", 00:23:03.019 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:03.019 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:03.019 "hdgst": false, 00:23:03.019 "ddgst": false 00:23:03.019 }, 00:23:03.019 "method": "bdev_nvme_attach_controller" 00:23:03.019 }' 00:23:03.019 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.019 [2024-07-15 20:21:28.208101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.019 [2024-07-15 20:21:28.295877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.914 Running I/O for 1 seconds... 00:23:05.851 00:23:05.851 Latency(us) 00:23:05.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.851 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:05.851 Verification LBA range: start 0x0 length 0x400 00:23:05.851 Nvme1n1 : 1.22 210.36 13.15 0.00 0.00 300665.25 19660.80 305040.29 00:23:05.851 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:05.851 Verification LBA range: start 0x0 length 0x400 00:23:05.851 Nvme2n1 : 1.13 183.99 11.50 0.00 0.00 314140.00 10783.65 305040.29 00:23:05.851 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:05.851 Verification LBA range: start 0x0 length 0x400 00:23:05.851 Nvme3n1 : 1.17 163.51 10.22 0.00 0.00 371271.37 32887.16 337450.82 00:23:05.851 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:05.851 Verification LBA range: start 0x0 length 0x400 00:23:05.851 Nvme4n1 : 1.18 220.49 13.78 0.00 0.00 268636.88 9711.24 306946.79 00:23:05.851 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:05.851 Verification LBA range: start 0x0 length 0x400 00:23:05.851 Nvme5n1 : 1.23 208.13 13.01 0.00 0.00 280489.66 25022.84 301227.29 00:23:05.851 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:05.851 Verification LBA range: start 0x0 length 0x400 00:23:05.851 Nvme6n1 : 1.24 206.89 12.93 0.00 0.00 276363.87 24069.59 285975.27 00:23:05.851 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:05.851 Verification LBA range: start 0x0 length 0x400 00:23:05.851 Nvme7n1 : 1.22 157.20 9.82 0.00 0.00 355485.32 30384.87 337450.82 00:23:05.851 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:05.851 Verification LBA range: start 0x0 length 0x400 00:23:05.851 Nvme8n1 : 1.23 211.00 13.19 0.00 0.00 258357.12 5600.35 278349.27 00:23:05.851 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:05.851 Verification LBA range: start 0x0 length 0x400 00:23:05.851 Nvme9n1 : 1.24 206.21 12.89 0.00 0.00 259744.12 15490.33 285975.27 00:23:05.851 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:05.851 Verification LBA range: start 0x0 length 0x400 00:23:05.851 Nvme10n1 : 1.29 198.85 12.43 0.00 0.00 255928.09 18588.39 316479.30 00:23:05.851 =================================================================================================================== 00:23:05.851 Total : 1966.63 122.91 0.00 0.00 289897.05 5600.35 337450.82 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:06.110 rmmod nvme_tcp 00:23:06.110 rmmod nvme_fabrics 00:23:06.110 rmmod nvme_keyring 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 113093 ']' 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 113093 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 113093 ']' 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 113093 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113093 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113093' 00:23:06.110 killing process with pid 113093 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 113093 00:23:06.110 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 113093 
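Editor's note: the trace above is the tc1 teardown. stoptarget removes the bdevperf state and config files, nvmftestfini unloads the kernel NVMe/TCP initiator modules (the rmmod lines are the output of modprobe -v -r nvme-tcp), and killprocess checks that pid 113093 is still present and inspects its command name before killing and reaping it. A minimal sketch of that pattern, with an assumed $SPDK_DIR shorthand and simplified error handling compared to the real helpers in common/autotest_common.sh:

    # Minimal sketch of the stoptarget/killprocess teardown traced above;
    # $SPDK_DIR is assumed, checks are simplified versus the real helpers.
    teardown_tc() {
        local pid=$1
        rm -f ./local-job0-0-verify.state
        rm -rf "$SPDK_DIR/test/nvmf/target/bdevperf.conf" "$SPDK_DIR/test/nvmf/target/rpcs.txt"
        sync
        modprobe -v -r nvme-tcp || true        # also drops nvme_fabrics/nvme_keyring dependencies
        modprobe -v -r nvme-fabrics || true
        if kill -0 "$pid" 2>/dev/null; then    # only signal the target if it is still running
            ps --no-headers -o comm= "$pid"    # sanity-check what is about to be killed
            kill "$pid"
            wait "$pid" || true                # reap the child so the test exits cleanly
        fi
    }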
00:23:06.679 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:06.679 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:06.679 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:06.679 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:06.679 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:06.679 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.679 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.679 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.605 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:08.605 00:23:08.605 real 0m15.031s 00:23:08.605 user 0m34.022s 00:23:08.605 sys 0m5.509s 00:23:08.605 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:08.605 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:08.605 ************************************ 00:23:08.605 END TEST nvmf_shutdown_tc1 00:23:08.605 ************************************ 00:23:08.605 20:21:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:08.605 20:21:33 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:08.605 20:21:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:08.605 20:21:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:08.605 20:21:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:08.605 ************************************ 00:23:08.605 START TEST nvmf_shutdown_tc2 00:23:08.605 ************************************ 00:23:08.605 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:08.605 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:08.605 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:08.605 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:08.605 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:08.605 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:08.605 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:08.605 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:08.605 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.605 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.605 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:08.863 
20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:08.863 20:21:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:08.863 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:08.863 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:23:08.863 Found net devices under 0000:af:00.0: cvl_0_0 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:08.863 Found net devices under 0000:af:00.1: cvl_0_1 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:08.863 20:21:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:08.863 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:23:08.863 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:08.863 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:08.863 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:08.863 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:08.863 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:09.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:23:09.121 00:23:09.121 --- 10.0.0.2 ping statistics --- 00:23:09.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.121 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:09.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:23:09.121 00:23:09.121 --- 10.0.0.1 ping statistics --- 00:23:09.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.121 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=114856 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 114856 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 
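Editor's note: tc2 repeats the nvmftestinit sequence traced above. Both e810 ports (cvl_0_0 and cvl_0_1) are detected, cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2/24) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), port 4420 is opened in iptables, connectivity is verified with a ping in each direction, and nvmf_tgt is launched inside the namespace. A condensed sketch of that topology setup, using the interface names from this run and an assumed $SPDK_DIR (the doubled "ip netns exec" prefix on the launch line above appears to come from NVMF_TARGET_NS_CMD being prepended to NVMF_APP and is not required):

    # Condensed sketch of the nvmf_tcp_init topology traced above.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                   # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1            # target -> initiator
    ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &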
00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 114856 ']' 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.121 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.121 [2024-07-15 20:21:34.330339] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:23:09.121 [2024-07-15 20:21:34.330396] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.121 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.121 [2024-07-15 20:21:34.407243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:09.379 [2024-07-15 20:21:34.500707] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.379 [2024-07-15 20:21:34.500749] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.379 [2024-07-15 20:21:34.500759] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.379 [2024-07-15 20:21:34.500768] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.379 [2024-07-15 20:21:34.500775] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
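Editor's note: the target is launched with -m 0x1E, so the app-start banner above reports four usable cores and the reactors that follow come up on cores 1-4; core 0 is left free for the bdevperf initiator, which is started later with a 0x1 mask. A throwaway helper to decode such a mask, purely illustrative and not part of the test scripts:

    # Decode an SPDK/DPDK hex core mask into the list of core IDs it selects.
    decode_coremask() {
        local mask=$(( $1 )) core=0 cores=()
        while (( mask )); do
            (( mask & 1 )) && cores+=("$core")
            mask=$(( mask >> 1 )); core=$(( core + 1 ))
        done
        echo "${cores[*]}"
    }
    decode_coremask 0x1E   # -> "1 2 3 4", matching the four reactors started below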
00:23:09.379 [2024-07-15 20:21:34.500833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.379 [2024-07-15 20:21:34.500921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:09.379 [2024-07-15 20:21:34.501033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:09.379 [2024-07-15 20:21:34.501033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.379 [2024-07-15 20:21:34.656742] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:09.379 20:21:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.379 20:21:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.637 Malloc1 00:23:09.637 [2024-07-15 20:21:34.757363] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.637 Malloc2 00:23:09.637 Malloc3 00:23:09.637 Malloc4 00:23:09.637 Malloc5 00:23:09.637 Malloc6 00:23:09.894 Malloc7 00:23:09.894 Malloc8 00:23:09.894 Malloc9 00:23:09.894 Malloc10 00:23:09.894 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.894 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:09.894 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:09.894 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.894 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=115163 00:23:09.894 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 115163 /var/tmp/bdevperf.sock 00:23:09.894 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 115163 ']' 00:23:09.894 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.894 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:09.894 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.894 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:09.894 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
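Editor's note: before bdevperf is launched, the loop above writes one block per subsystem into rpcs.txt and rpc_cmd applies it, producing the Malloc1-Malloc10 bdevs and the listener on 10.0.0.2 port 4420 shown in the log. The literal rpcs.txt contents are not traced in this section, so the following is only a hedged approximation of what each iteration typically issues, written against the standard SPDK rpc.py interface; bdev sizes and serial numbers are assumed, while NQNs and the listener address match the log:

    # Hedged approximation of the per-subsystem setup applied via rpcs.txt.
    for i in $(seq 1 10); do
        scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512                  # 64 MiB bdev, 512 B blocks (assumed)
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done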
00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.895 { 00:23:09.895 "params": { 00:23:09.895 "name": "Nvme$subsystem", 00:23:09.895 "trtype": "$TEST_TRANSPORT", 00:23:09.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.895 "adrfam": "ipv4", 00:23:09.895 "trsvcid": "$NVMF_PORT", 00:23:09.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.895 "hdgst": ${hdgst:-false}, 00:23:09.895 "ddgst": ${ddgst:-false} 00:23:09.895 }, 00:23:09.895 "method": "bdev_nvme_attach_controller" 00:23:09.895 } 00:23:09.895 EOF 00:23:09.895 )") 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.895 { 00:23:09.895 "params": { 00:23:09.895 "name": "Nvme$subsystem", 00:23:09.895 "trtype": "$TEST_TRANSPORT", 00:23:09.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.895 "adrfam": "ipv4", 00:23:09.895 "trsvcid": "$NVMF_PORT", 00:23:09.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.895 "hdgst": ${hdgst:-false}, 00:23:09.895 "ddgst": ${ddgst:-false} 00:23:09.895 }, 00:23:09.895 "method": "bdev_nvme_attach_controller" 00:23:09.895 } 00:23:09.895 EOF 00:23:09.895 )") 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.895 { 00:23:09.895 "params": { 00:23:09.895 "name": "Nvme$subsystem", 00:23:09.895 "trtype": "$TEST_TRANSPORT", 00:23:09.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.895 "adrfam": "ipv4", 00:23:09.895 "trsvcid": "$NVMF_PORT", 00:23:09.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.895 "hdgst": ${hdgst:-false}, 00:23:09.895 "ddgst": ${ddgst:-false} 00:23:09.895 }, 00:23:09.895 "method": "bdev_nvme_attach_controller" 00:23:09.895 } 00:23:09.895 EOF 00:23:09.895 )") 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.895 { 00:23:09.895 "params": { 00:23:09.895 "name": "Nvme$subsystem", 00:23:09.895 "trtype": "$TEST_TRANSPORT", 00:23:09.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.895 "adrfam": "ipv4", 00:23:09.895 "trsvcid": "$NVMF_PORT", 
00:23:09.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.895 "hdgst": ${hdgst:-false}, 00:23:09.895 "ddgst": ${ddgst:-false} 00:23:09.895 }, 00:23:09.895 "method": "bdev_nvme_attach_controller" 00:23:09.895 } 00:23:09.895 EOF 00:23:09.895 )") 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.895 { 00:23:09.895 "params": { 00:23:09.895 "name": "Nvme$subsystem", 00:23:09.895 "trtype": "$TEST_TRANSPORT", 00:23:09.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.895 "adrfam": "ipv4", 00:23:09.895 "trsvcid": "$NVMF_PORT", 00:23:09.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.895 "hdgst": ${hdgst:-false}, 00:23:09.895 "ddgst": ${ddgst:-false} 00:23:09.895 }, 00:23:09.895 "method": "bdev_nvme_attach_controller" 00:23:09.895 } 00:23:09.895 EOF 00:23:09.895 )") 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.895 { 00:23:09.895 "params": { 00:23:09.895 "name": "Nvme$subsystem", 00:23:09.895 "trtype": "$TEST_TRANSPORT", 00:23:09.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.895 "adrfam": "ipv4", 00:23:09.895 "trsvcid": "$NVMF_PORT", 00:23:09.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.895 "hdgst": ${hdgst:-false}, 00:23:09.895 "ddgst": ${ddgst:-false} 00:23:09.895 }, 00:23:09.895 "method": "bdev_nvme_attach_controller" 00:23:09.895 } 00:23:09.895 EOF 00:23:09.895 )") 00:23:09.895 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.151 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.151 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.151 { 00:23:10.151 "params": { 00:23:10.151 "name": "Nvme$subsystem", 00:23:10.151 "trtype": "$TEST_TRANSPORT", 00:23:10.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.151 "adrfam": "ipv4", 00:23:10.151 "trsvcid": "$NVMF_PORT", 00:23:10.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.151 "hdgst": ${hdgst:-false}, 00:23:10.151 "ddgst": ${ddgst:-false} 00:23:10.151 }, 00:23:10.151 "method": "bdev_nvme_attach_controller" 00:23:10.151 } 00:23:10.151 EOF 00:23:10.151 )") 00:23:10.151 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.151 [2024-07-15 20:21:35.249528] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:23:10.151 [2024-07-15 20:21:35.249575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115163 ] 00:23:10.151 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.151 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.151 { 00:23:10.151 "params": { 00:23:10.151 "name": "Nvme$subsystem", 00:23:10.151 "trtype": "$TEST_TRANSPORT", 00:23:10.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.151 "adrfam": "ipv4", 00:23:10.151 "trsvcid": "$NVMF_PORT", 00:23:10.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.151 "hdgst": ${hdgst:-false}, 00:23:10.151 "ddgst": ${ddgst:-false} 00:23:10.151 }, 00:23:10.151 "method": "bdev_nvme_attach_controller" 00:23:10.151 } 00:23:10.151 EOF 00:23:10.152 )") 00:23:10.152 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.152 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.152 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.152 { 00:23:10.152 "params": { 00:23:10.152 "name": "Nvme$subsystem", 00:23:10.152 "trtype": "$TEST_TRANSPORT", 00:23:10.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.152 "adrfam": "ipv4", 00:23:10.152 "trsvcid": "$NVMF_PORT", 00:23:10.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.152 "hdgst": ${hdgst:-false}, 00:23:10.152 "ddgst": ${ddgst:-false} 00:23:10.152 }, 00:23:10.152 "method": "bdev_nvme_attach_controller" 00:23:10.152 } 00:23:10.152 EOF 00:23:10.152 )") 00:23:10.152 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.152 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.152 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.152 { 00:23:10.152 "params": { 00:23:10.152 "name": "Nvme$subsystem", 00:23:10.152 "trtype": "$TEST_TRANSPORT", 00:23:10.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.152 "adrfam": "ipv4", 00:23:10.152 "trsvcid": "$NVMF_PORT", 00:23:10.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.152 "hdgst": ${hdgst:-false}, 00:23:10.152 "ddgst": ${ddgst:-false} 00:23:10.152 }, 00:23:10.152 "method": "bdev_nvme_attach_controller" 00:23:10.152 } 00:23:10.152 EOF 00:23:10.152 )") 00:23:10.152 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.152 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
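Editor's note: the trace above is gen_nvmf_target_json building the --json payload for bdevperf. One bdev_nvme_attach_controller fragment per subsystem is captured from the heredoc template into the config array, then the fragments are comma-joined via IFS and pretty-printed through jq; the fully expanded result is printed below. A compressed sketch of that pattern under assumed values (the function name is illustrative, traddr/trsvcid are hard-coded to this run's values, and the real helper in nvmf/common.sh wraps the same list inside the full bdev-subsystem configuration rather than a bare array):

    # Compressed sketch of the fragment-join pattern traced above.
    gen_attach_fragments() {
        local subsystem config=()
        for subsystem in "$@"; do
            config+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "tcp",
              "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode%s",
              "hostnqn": "nqn.2016-06.io.spdk:host%s",
              "hdgst": false, "ddgst": false},
              "method": "bdev_nvme_attach_controller"}' "$subsystem" "$subsystem" "$subsystem")")
        done
        local IFS=,
        jq . <<< "[${config[*]}]"    # comma-join the fragments and pretty-print
    }
    gen_attach_fragments 1 2 3       # emits three attach_controller entries, one per cnode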
00:23:10.152 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.152 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:10.152 20:21:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:10.152 "params": { 00:23:10.152 "name": "Nvme1", 00:23:10.152 "trtype": "tcp", 00:23:10.152 "traddr": "10.0.0.2", 00:23:10.152 "adrfam": "ipv4", 00:23:10.152 "trsvcid": "4420", 00:23:10.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.152 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.152 "hdgst": false, 00:23:10.152 "ddgst": false 00:23:10.152 }, 00:23:10.152 "method": "bdev_nvme_attach_controller" 00:23:10.152 },{ 00:23:10.152 "params": { 00:23:10.152 "name": "Nvme2", 00:23:10.152 "trtype": "tcp", 00:23:10.152 "traddr": "10.0.0.2", 00:23:10.152 "adrfam": "ipv4", 00:23:10.152 "trsvcid": "4420", 00:23:10.152 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:10.152 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:10.152 "hdgst": false, 00:23:10.152 "ddgst": false 00:23:10.152 }, 00:23:10.152 "method": "bdev_nvme_attach_controller" 00:23:10.152 },{ 00:23:10.152 "params": { 00:23:10.152 "name": "Nvme3", 00:23:10.152 "trtype": "tcp", 00:23:10.152 "traddr": "10.0.0.2", 00:23:10.152 "adrfam": "ipv4", 00:23:10.152 "trsvcid": "4420", 00:23:10.152 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:10.152 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:10.152 "hdgst": false, 00:23:10.152 "ddgst": false 00:23:10.152 }, 00:23:10.152 "method": "bdev_nvme_attach_controller" 00:23:10.152 },{ 00:23:10.152 "params": { 00:23:10.152 "name": "Nvme4", 00:23:10.152 "trtype": "tcp", 00:23:10.152 "traddr": "10.0.0.2", 00:23:10.152 "adrfam": "ipv4", 00:23:10.152 "trsvcid": "4420", 00:23:10.152 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:10.152 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:10.152 "hdgst": false, 00:23:10.152 "ddgst": false 00:23:10.152 }, 00:23:10.152 "method": "bdev_nvme_attach_controller" 00:23:10.152 },{ 00:23:10.152 "params": { 00:23:10.152 "name": "Nvme5", 00:23:10.152 "trtype": "tcp", 00:23:10.152 "traddr": "10.0.0.2", 00:23:10.152 "adrfam": "ipv4", 00:23:10.152 "trsvcid": "4420", 00:23:10.152 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:10.152 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:10.152 "hdgst": false, 00:23:10.152 "ddgst": false 00:23:10.152 }, 00:23:10.152 "method": "bdev_nvme_attach_controller" 00:23:10.152 },{ 00:23:10.152 "params": { 00:23:10.152 "name": "Nvme6", 00:23:10.152 "trtype": "tcp", 00:23:10.152 "traddr": "10.0.0.2", 00:23:10.152 "adrfam": "ipv4", 00:23:10.152 "trsvcid": "4420", 00:23:10.152 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:10.152 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:10.152 "hdgst": false, 00:23:10.152 "ddgst": false 00:23:10.152 }, 00:23:10.152 "method": "bdev_nvme_attach_controller" 00:23:10.152 },{ 00:23:10.152 "params": { 00:23:10.152 "name": "Nvme7", 00:23:10.152 "trtype": "tcp", 00:23:10.152 "traddr": "10.0.0.2", 00:23:10.152 "adrfam": "ipv4", 00:23:10.152 "trsvcid": "4420", 00:23:10.152 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:10.152 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:10.152 "hdgst": false, 00:23:10.152 "ddgst": false 00:23:10.152 }, 00:23:10.152 "method": "bdev_nvme_attach_controller" 00:23:10.152 },{ 00:23:10.152 "params": { 00:23:10.152 "name": "Nvme8", 00:23:10.152 "trtype": "tcp", 00:23:10.152 "traddr": "10.0.0.2", 00:23:10.152 "adrfam": "ipv4", 00:23:10.152 "trsvcid": "4420", 00:23:10.152 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:10.152 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:23:10.152 "hdgst": false, 00:23:10.152 "ddgst": false 00:23:10.152 }, 00:23:10.152 "method": "bdev_nvme_attach_controller" 00:23:10.152 },{ 00:23:10.152 "params": { 00:23:10.152 "name": "Nvme9", 00:23:10.152 "trtype": "tcp", 00:23:10.152 "traddr": "10.0.0.2", 00:23:10.152 "adrfam": "ipv4", 00:23:10.152 "trsvcid": "4420", 00:23:10.152 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:10.152 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:10.152 "hdgst": false, 00:23:10.152 "ddgst": false 00:23:10.152 }, 00:23:10.152 "method": "bdev_nvme_attach_controller" 00:23:10.152 },{ 00:23:10.152 "params": { 00:23:10.152 "name": "Nvme10", 00:23:10.152 "trtype": "tcp", 00:23:10.152 "traddr": "10.0.0.2", 00:23:10.152 "adrfam": "ipv4", 00:23:10.152 "trsvcid": "4420", 00:23:10.152 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:10.152 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:10.152 "hdgst": false, 00:23:10.152 "ddgst": false 00:23:10.152 }, 00:23:10.152 "method": "bdev_nvme_attach_controller" 00:23:10.152 }' 00:23:10.152 [2024-07-15 20:21:35.320614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.153 [2024-07-15 20:21:35.407380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.524 Running I/O for 10 seconds... 00:23:11.782 20:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.782 20:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:11.782 20:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:11.782 20:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.782 20:21:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:11.782 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.782 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:11.782 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:11.782 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:11.782 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:11.782 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:11.782 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:11.782 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:11.782 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:11.782 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:11.782 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.782 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:11.782 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.039 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:12.039 20:21:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:12.039 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:12.297 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:12.297 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:12.297 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:12.297 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:12.297 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.297 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.297 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.297 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:12.297 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:12.297 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:12.555 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:12.555 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:12.555 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:12.555 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.555 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:12.555 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.555 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.555 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:12.555 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:12.555 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:12.555 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:12.555 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:12.555 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 115163 00:23:12.556 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 115163 ']' 00:23:12.556 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 115163 00:23:12.556 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:12.556 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:12.556 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 115163 00:23:12.556 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:12.556 20:21:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:12.556 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 115163' 00:23:12.556 killing process with pid 115163 00:23:12.556 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 115163 00:23:12.556 20:21:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 115163 00:23:12.556 Received shutdown signal, test time was about 1.015367 seconds 00:23:12.556 00:23:12.556 Latency(us) 00:23:12.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.556 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:12.556 Verification LBA range: start 0x0 length 0x400 00:23:12.556 Nvme1n1 : 0.98 196.04 12.25 0.00 0.00 321967.01 19660.80 299320.79 00:23:12.556 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:12.556 Verification LBA range: start 0x0 length 0x400 00:23:12.556 Nvme2n1 : 0.99 199.84 12.49 0.00 0.00 304770.68 11856.06 293601.28 00:23:12.556 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:12.556 Verification LBA range: start 0x0 length 0x400 00:23:12.556 Nvme3n1 : 1.00 191.25 11.95 0.00 0.00 311163.81 36223.53 301227.29 00:23:12.556 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:12.556 Verification LBA range: start 0x0 length 0x400 00:23:12.556 Nvme4n1 : 0.97 198.61 12.41 0.00 0.00 293151.96 21448.15 299320.79 00:23:12.556 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:12.556 Verification LBA range: start 0x0 length 0x400 00:23:12.556 Nvme5n1 : 0.96 199.32 12.46 0.00 0.00 285157.93 25022.84 306946.79 00:23:12.556 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:12.556 Verification LBA range: start 0x0 length 0x400 00:23:12.556 Nvme6n1 : 1.00 191.63 11.98 0.00 0.00 290113.47 25380.31 310759.80 00:23:12.556 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:12.556 Verification LBA range: start 0x0 length 0x400 00:23:12.556 Nvme7n1 : 1.01 187.31 11.71 0.00 0.00 288786.42 18945.86 326011.81 00:23:12.556 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:12.556 Verification LBA range: start 0x0 length 0x400 00:23:12.556 Nvme8n1 : 0.99 194.07 12.13 0.00 0.00 270053.62 25022.84 297414.28 00:23:12.556 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:12.556 Verification LBA range: start 0x0 length 0x400 00:23:12.556 Nvme9n1 : 1.00 192.28 12.02 0.00 0.00 265300.40 17873.45 314572.80 00:23:12.556 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:12.556 Verification LBA range: start 0x0 length 0x400 00:23:12.556 Nvme10n1 : 1.01 190.37 11.90 0.00 0.00 260888.82 23235.49 337450.82 00:23:12.556 =================================================================================================================== 00:23:12.556 Total : 1940.72 121.29 0.00 0.00 289176.43 11856.06 337450.82 00:23:12.814 20:21:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 114856 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:14.218 20:21:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:14.218 rmmod nvme_tcp 00:23:14.218 rmmod nvme_fabrics 00:23:14.218 rmmod nvme_keyring 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 114856 ']' 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 114856 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 114856 ']' 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 114856 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114856 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114856' 00:23:14.218 killing process with pid 114856 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 114856 00:23:14.218 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 114856 00:23:14.540 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:14.540 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:14.540 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:14.540 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:14.540 
20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:14.540 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.540 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.540 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.451 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:16.451 00:23:16.451 real 0m7.775s 00:23:16.451 user 0m23.324s 00:23:16.451 sys 0m1.320s 00:23:16.451 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:16.451 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:16.451 ************************************ 00:23:16.451 END TEST nvmf_shutdown_tc2 00:23:16.451 ************************************ 00:23:16.451 20:21:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:16.451 20:21:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:16.451 20:21:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:16.451 20:21:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:16.451 20:21:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:16.710 ************************************ 00:23:16.710 START TEST nvmf_shutdown_tc3 00:23:16.710 ************************************ 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:16.710 
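remove_spdk_ns is run through xtrace_disable_per_cmd, and the appended "14> /dev/null" suggests the shell's xtrace output is wired to file descriptor 14, so only that one command is silenced in the log. Its job is presumably to clear out the test namespace before the next test recreates it; the helper body is not shown here, but done by hand it would amount to something like:

    # delete any leftover SPDK test namespaces (sketch only, not the real helper)
    for ns in $(ip netns list | awk '/_ns_spdk/ {print $1}'); do
        ip netns delete "$ns" || true
    done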
20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
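The arrays built above are buckets of supported NIC device IDs (Intel E810 = 0x1592/0x159b, X722 = 0x37d2, plus the Mellanox list), filled from a pre-built pci_bus_cache map. The same E810 lookup can be approximated straight from lspci (illustrative only, not the helper itself):

    # list PCI functions whose vendor:device pair matches the E810 IDs used above
    lspci -Dn | awk '$3 == "8086:1592" || $3 == "8086:159b" {print $1}'
    # on this node that yields 0000:af:00.0 and 0000:af:00.1 (see the "Found ..." lines below)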
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:16.710 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:16.710 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.710 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:16.711 Found net devices under 0000:af:00.0: cvl_0_0 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.711 20:21:41 
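Each matched PCI function is then resolved to its kernel interface through sysfs, which is where the cvl_0_0/cvl_0_1 names come from; the glob in the trace is equivalent to:

    # map a PCI address to its network interface name
    ls /sys/bus/pci/devices/0000:af:00.0/net/    # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:af:00.1/net/    # -> cvl_0_1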
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:16.711 Found net devices under 0000:af:00.1: cvl_0_1 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:16.711 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:16.969 20:21:42 
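nvmf_tcp_init then splits the two ports across namespaces so target and initiator talk over a real link: cvl_0_0 (10.0.0.2, the target port) moves into cvl_0_0_ns_spdk, while cvl_0_1 (10.0.0.1, the initiator port) stays in the default namespace. Collected from the scattered trace lines above, the setup is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into its namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

The loopback bring-up, the port-4420 iptables rule, and the cross-namespace pings that verify the link follow just below.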
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:16.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:16.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:23:16.969 00:23:16.969 --- 10.0.0.2 ping statistics --- 00:23:16.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.969 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:16.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:16.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:23:16.969 00:23:16.969 --- 10.0.0.1 ping statistics --- 00:23:16.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.969 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=116411 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 116411 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 116411 ']' 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:16.969 20:21:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:16.969 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.969 [2024-07-15 20:21:42.203048] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:23:16.969 [2024-07-15 20:21:42.203103] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.969 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.969 [2024-07-15 20:21:42.280868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:17.225 [2024-07-15 20:21:42.374702] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.225 [2024-07-15 20:21:42.374744] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.225 [2024-07-15 20:21:42.374754] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.225 [2024-07-15 20:21:42.374763] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.225 [2024-07-15 20:21:42.374770] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.225 [2024-07-15 20:21:42.374815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.225 [2024-07-15 20:21:42.374902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:17.225 [2024-07-15 20:21:42.375015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.225 [2024-07-15 20:21:42.375014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.154 [2024-07-15 20:21:43.186973] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
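The "-m 0x1E" passed to nvmf_tgt above is a reactor core mask, and it matches the "Total cores available: 4" notice and the four "Reactor started on core 1/2/3/4" lines in the startup banner: 0x1E is binary 11110, i.e. cores 1 through 4, while the bdevperf initiator started later runs on core 0 (its job headers show Core Mask 0x1). A one-liner to expand such a mask:

    # expand a core mask into the list of enabled cores (0x1E -> 1 2 3 4)
    mask=0x1E; for bit in $(seq 0 31); do (( (mask >> bit) & 1 )) && printf '%d ' "$bit"; done; echo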
target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.154 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.154 Malloc1 00:23:18.154 [2024-07-15 20:21:43.287132] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.154 Malloc2 00:23:18.154 Malloc3 00:23:18.154 Malloc4 00:23:18.154 Malloc5 00:23:18.154 Malloc6 00:23:18.412 Malloc7 00:23:18.412 Malloc8 00:23:18.412 Malloc9 00:23:18.412 Malloc10 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
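The ten "cat" calls above append one block per subsystem to rpcs.txt, which is then replayed in a single rpc_cmd invocation; the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener notice that follow indicate each block carries roughly the RPCs below. This is inferred from the output, not quoted from shutdown.sh, and the Malloc size and serial-number arguments are illustrative; only the NQNs and the listener address are confirmed by the log:

    # per-subsystem block appended to rpcs.txt for each $i in 1..10
    bdev_malloc_create -b Malloc$i 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420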
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=116768 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 116768 /var/tmp/bdevperf.sock 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 116768 ']' 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.412 { 00:23:18.412 "params": { 00:23:18.412 "name": "Nvme$subsystem", 00:23:18.412 "trtype": "$TEST_TRANSPORT", 00:23:18.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.412 "adrfam": "ipv4", 00:23:18.412 "trsvcid": "$NVMF_PORT", 00:23:18.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.412 "hdgst": ${hdgst:-false}, 00:23:18.412 "ddgst": ${ddgst:-false} 00:23:18.412 }, 00:23:18.412 "method": "bdev_nvme_attach_controller" 00:23:18.412 } 00:23:18.412 EOF 00:23:18.412 )") 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.412 { 00:23:18.412 "params": { 00:23:18.412 "name": "Nvme$subsystem", 00:23:18.412 "trtype": "$TEST_TRANSPORT", 00:23:18.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.412 "adrfam": "ipv4", 00:23:18.412 "trsvcid": "$NVMF_PORT", 00:23:18.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:18.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.412 "hdgst": ${hdgst:-false}, 00:23:18.412 "ddgst": ${ddgst:-false} 00:23:18.412 }, 00:23:18.412 "method": "bdev_nvme_attach_controller" 00:23:18.412 } 00:23:18.412 EOF 00:23:18.412 )") 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.412 { 00:23:18.412 "params": { 00:23:18.412 "name": "Nvme$subsystem", 00:23:18.412 "trtype": "$TEST_TRANSPORT", 00:23:18.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.412 "adrfam": "ipv4", 00:23:18.412 "trsvcid": "$NVMF_PORT", 00:23:18.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.412 "hdgst": ${hdgst:-false}, 00:23:18.412 "ddgst": ${ddgst:-false} 00:23:18.412 }, 00:23:18.412 "method": "bdev_nvme_attach_controller" 00:23:18.412 } 00:23:18.412 EOF 00:23:18.412 )") 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.412 { 00:23:18.412 "params": { 00:23:18.412 "name": "Nvme$subsystem", 00:23:18.412 "trtype": "$TEST_TRANSPORT", 00:23:18.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.412 "adrfam": "ipv4", 00:23:18.412 "trsvcid": "$NVMF_PORT", 00:23:18.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.412 "hdgst": ${hdgst:-false}, 00:23:18.412 "ddgst": ${ddgst:-false} 00:23:18.412 }, 00:23:18.412 "method": "bdev_nvme_attach_controller" 00:23:18.412 } 00:23:18.412 EOF 00:23:18.412 )") 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.412 { 00:23:18.412 "params": { 00:23:18.412 "name": "Nvme$subsystem", 00:23:18.412 "trtype": "$TEST_TRANSPORT", 00:23:18.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.412 "adrfam": "ipv4", 00:23:18.412 "trsvcid": "$NVMF_PORT", 00:23:18.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.412 "hdgst": ${hdgst:-false}, 00:23:18.412 "ddgst": ${ddgst:-false} 00:23:18.412 }, 00:23:18.412 "method": "bdev_nvme_attach_controller" 00:23:18.412 } 00:23:18.412 EOF 00:23:18.412 )") 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.412 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.412 { 00:23:18.412 "params": { 00:23:18.412 "name": "Nvme$subsystem", 00:23:18.412 "trtype": "$TEST_TRANSPORT", 00:23:18.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.412 "adrfam": "ipv4", 00:23:18.412 "trsvcid": "$NVMF_PORT", 00:23:18.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.412 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:18.412 "hdgst": ${hdgst:-false}, 00:23:18.412 "ddgst": ${ddgst:-false} 00:23:18.412 }, 00:23:18.412 "method": "bdev_nvme_attach_controller" 00:23:18.412 } 00:23:18.412 EOF 00:23:18.412 )") 00:23:18.706 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.706 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.706 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.706 { 00:23:18.706 "params": { 00:23:18.706 "name": "Nvme$subsystem", 00:23:18.706 "trtype": "$TEST_TRANSPORT", 00:23:18.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.706 "adrfam": "ipv4", 00:23:18.706 "trsvcid": "$NVMF_PORT", 00:23:18.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.706 "hdgst": ${hdgst:-false}, 00:23:18.706 "ddgst": ${ddgst:-false} 00:23:18.706 }, 00:23:18.706 "method": "bdev_nvme_attach_controller" 00:23:18.706 } 00:23:18.706 EOF 00:23:18.706 )") 00:23:18.706 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.706 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.706 [2024-07-15 20:21:43.772401] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:23:18.706 [2024-07-15 20:21:43.772460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116768 ] 00:23:18.706 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.706 { 00:23:18.706 "params": { 00:23:18.706 "name": "Nvme$subsystem", 00:23:18.706 "trtype": "$TEST_TRANSPORT", 00:23:18.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.706 "adrfam": "ipv4", 00:23:18.706 "trsvcid": "$NVMF_PORT", 00:23:18.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.706 "hdgst": ${hdgst:-false}, 00:23:18.706 "ddgst": ${ddgst:-false} 00:23:18.706 }, 00:23:18.706 "method": "bdev_nvme_attach_controller" 00:23:18.706 } 00:23:18.706 EOF 00:23:18.706 )") 00:23:18.706 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.706 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.706 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.706 { 00:23:18.706 "params": { 00:23:18.706 "name": "Nvme$subsystem", 00:23:18.706 "trtype": "$TEST_TRANSPORT", 00:23:18.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.706 "adrfam": "ipv4", 00:23:18.706 "trsvcid": "$NVMF_PORT", 00:23:18.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.706 "hdgst": ${hdgst:-false}, 00:23:18.706 "ddgst": ${ddgst:-false} 00:23:18.706 }, 00:23:18.706 "method": "bdev_nvme_attach_controller" 00:23:18.706 } 00:23:18.706 EOF 00:23:18.706 )") 00:23:18.706 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.706 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.706 20:21:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.706 { 00:23:18.706 "params": { 00:23:18.706 "name": "Nvme$subsystem", 00:23:18.706 "trtype": "$TEST_TRANSPORT", 00:23:18.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.706 "adrfam": "ipv4", 00:23:18.706 "trsvcid": "$NVMF_PORT", 00:23:18.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.706 "hdgst": ${hdgst:-false}, 00:23:18.707 "ddgst": ${ddgst:-false} 00:23:18.707 }, 00:23:18.707 "method": "bdev_nvme_attach_controller" 00:23:18.707 } 00:23:18.707 EOF 00:23:18.707 )") 00:23:18.707 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.707 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:18.707 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:18.707 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:18.707 "params": { 00:23:18.707 "name": "Nvme1", 00:23:18.707 "trtype": "tcp", 00:23:18.707 "traddr": "10.0.0.2", 00:23:18.707 "adrfam": "ipv4", 00:23:18.707 "trsvcid": "4420", 00:23:18.707 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.707 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.707 "hdgst": false, 00:23:18.707 "ddgst": false 00:23:18.707 }, 00:23:18.707 "method": "bdev_nvme_attach_controller" 00:23:18.707 },{ 00:23:18.707 "params": { 00:23:18.707 "name": "Nvme2", 00:23:18.707 "trtype": "tcp", 00:23:18.707 "traddr": "10.0.0.2", 00:23:18.707 "adrfam": "ipv4", 00:23:18.707 "trsvcid": "4420", 00:23:18.707 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:18.707 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:18.707 "hdgst": false, 00:23:18.707 "ddgst": false 00:23:18.707 }, 00:23:18.707 "method": "bdev_nvme_attach_controller" 00:23:18.707 },{ 00:23:18.707 "params": { 00:23:18.707 "name": "Nvme3", 00:23:18.707 "trtype": "tcp", 00:23:18.707 "traddr": "10.0.0.2", 00:23:18.707 "adrfam": "ipv4", 00:23:18.707 "trsvcid": "4420", 00:23:18.707 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:18.707 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:18.707 "hdgst": false, 00:23:18.707 "ddgst": false 00:23:18.707 }, 00:23:18.707 "method": "bdev_nvme_attach_controller" 00:23:18.707 },{ 00:23:18.707 "params": { 00:23:18.707 "name": "Nvme4", 00:23:18.707 "trtype": "tcp", 00:23:18.707 "traddr": "10.0.0.2", 00:23:18.707 "adrfam": "ipv4", 00:23:18.707 "trsvcid": "4420", 00:23:18.707 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:18.707 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:18.707 "hdgst": false, 00:23:18.707 "ddgst": false 00:23:18.707 }, 00:23:18.707 "method": "bdev_nvme_attach_controller" 00:23:18.707 },{ 00:23:18.707 "params": { 00:23:18.707 "name": "Nvme5", 00:23:18.707 "trtype": "tcp", 00:23:18.707 "traddr": "10.0.0.2", 00:23:18.707 "adrfam": "ipv4", 00:23:18.707 "trsvcid": "4420", 00:23:18.707 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:18.707 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:18.707 "hdgst": false, 00:23:18.707 "ddgst": false 00:23:18.707 }, 00:23:18.707 "method": "bdev_nvme_attach_controller" 00:23:18.707 },{ 00:23:18.707 "params": { 00:23:18.707 "name": "Nvme6", 00:23:18.707 "trtype": "tcp", 00:23:18.707 "traddr": "10.0.0.2", 00:23:18.707 "adrfam": "ipv4", 00:23:18.707 "trsvcid": "4420", 00:23:18.707 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:18.707 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:18.707 "hdgst": false, 00:23:18.707 "ddgst": false 
00:23:18.707 }, 00:23:18.707 "method": "bdev_nvme_attach_controller" 00:23:18.707 },{ 00:23:18.707 "params": { 00:23:18.707 "name": "Nvme7", 00:23:18.707 "trtype": "tcp", 00:23:18.707 "traddr": "10.0.0.2", 00:23:18.707 "adrfam": "ipv4", 00:23:18.707 "trsvcid": "4420", 00:23:18.707 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:18.707 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:18.707 "hdgst": false, 00:23:18.707 "ddgst": false 00:23:18.707 }, 00:23:18.707 "method": "bdev_nvme_attach_controller" 00:23:18.707 },{ 00:23:18.707 "params": { 00:23:18.707 "name": "Nvme8", 00:23:18.707 "trtype": "tcp", 00:23:18.707 "traddr": "10.0.0.2", 00:23:18.707 "adrfam": "ipv4", 00:23:18.707 "trsvcid": "4420", 00:23:18.707 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:18.707 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:18.707 "hdgst": false, 00:23:18.707 "ddgst": false 00:23:18.707 }, 00:23:18.707 "method": "bdev_nvme_attach_controller" 00:23:18.707 },{ 00:23:18.707 "params": { 00:23:18.707 "name": "Nvme9", 00:23:18.707 "trtype": "tcp", 00:23:18.707 "traddr": "10.0.0.2", 00:23:18.707 "adrfam": "ipv4", 00:23:18.707 "trsvcid": "4420", 00:23:18.707 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:18.707 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:18.707 "hdgst": false, 00:23:18.707 "ddgst": false 00:23:18.707 }, 00:23:18.707 "method": "bdev_nvme_attach_controller" 00:23:18.707 },{ 00:23:18.707 "params": { 00:23:18.707 "name": "Nvme10", 00:23:18.707 "trtype": "tcp", 00:23:18.707 "traddr": "10.0.0.2", 00:23:18.707 "adrfam": "ipv4", 00:23:18.707 "trsvcid": "4420", 00:23:18.707 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:18.707 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:18.707 "hdgst": false, 00:23:18.707 "ddgst": false 00:23:18.707 }, 00:23:18.707 "method": "bdev_nvme_attach_controller" 00:23:18.707 }' 00:23:18.707 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.707 [2024-07-15 20:21:43.855888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.707 [2024-07-15 20:21:43.943577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.078 Running I/O for 10 seconds... 
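The JSON printed above is the bdevperf configuration produced by gen_nvmf_target_json: one bdev_nvme_attach_controller entry per subsystem, each connecting to 10.0.0.2:4420 over TCP, which yields the Nvme1n1..Nvme10n1 bdevs the job runs against. The bdevperf flags on the launch line map one-to-one onto the job header in the result tables: -q 64 is the queue depth, -o 65536 the 64 KiB I/O size, -w verify the read-back workload, and -t 10 the runtime announced as "Running I/O for 10 seconds...". If the generated config is written to a file (the file name below is illustrative; in this run it is handed over as /dev/fd/63), the controller count can be checked with:

    jq '[.. | objects | select(.method? == "bdev_nvme_attach_controller")] | length' bdevperf.json   # -> 10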
00:23:20.078 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.078 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:20.078 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:20.078 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.078 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:20.336 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.336 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:20.336 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:20.336 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:20.336 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:20.336 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:20.336 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:20.336 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:20.336 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:20.336 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:20.336 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:20.336 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.336 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:20.336 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.336 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:20.336 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:20.336 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:20.594 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:20.594 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:20.594 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:20.594 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:20.594 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.594 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:20.594 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.594 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
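waitforio, traced here, is a bounded poll: it reads num_read_ops for Nvme1n1 over the bdevperf RPC socket until at least 100 reads have completed (3 on the first probe above, reaching 131 a couple of probes later) and only then lets the test proceed to killing the target while I/O is still in flight. Reconstructed from the -x trace rather than copied from shutdown.sh, the helper looks roughly like this (rpc_cmd is the test harness wrapper around scripts/rpc.py):

    # sketch of the waitforio helper as seen in this trace
    waitforio() {
        local rpc_sock=$1 bdev=$2
        local ret=1 i read_io_count
        for (( i = 10; i != 0; i-- )); do
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                            | jq -r '.bdevs[0].num_read_ops')
            [ "$read_io_count" -ge 100 ] && { ret=0; break; }
            sleep 0.25
        done
        return $ret
    }
    # invoked above as: waitforio /var/tmp/bdevperf.sock Nvme1n1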
-- # read_io_count=67 00:23:20.594 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:20.594 20:21:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:20.852 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:20.852 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:20.853 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:20.853 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.853 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:20.853 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:20.853 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.853 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:20.853 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:20.853 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:20.853 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:20.853 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:20.853 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 116411 00:23:20.853 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 116411 ']' 00:23:20.853 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 116411 00:23:20.853 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:23:20.853 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:20.853 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116411 00:23:21.126 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:21.126 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:21.126 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116411' 00:23:21.126 killing process with pid 116411 00:23:21.126 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 116411 00:23:21.126 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 116411 00:23:21.126 [2024-07-15 20:21:46.215601] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215673] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215682] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215690] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215697] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215707] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215715] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215722] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215729] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215735] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215749] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215755] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215762] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215770] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215777] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215784] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215795] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215802] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215810] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215817] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215824] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215832] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215839] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215847] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set 00:23:21.126 [2024-07-15 20:21:46.215854] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f270 is same with the state(5) to be set [message repeated continuously from 2024-07-15 20:21:46.215862 to 20:21:46.216138]
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f750 is same with the state(5) to be set [message repeated continuously from 20:21:46.219552 to 20:21:46.220126]
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190fc50 is same with the state(5) to be set [message repeated continuously from 20:21:46.221571 to 20:21:46.222791]
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1910b10 is same with the state(5) to be set [message repeated continuously from 20:21:46.225775 to 20:21:46.226336]
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19114d0 is same with the state(5) to be set [message repeated continuously from 20:21:46.228480 to 20:21:46.229043]
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19119b0 is same with the state(5) to be set [message repeated continuously from 20:21:46.229810 to 20:21:46.230348]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 through cid:63 nsid:1 lba:23040 through lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each command followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [20:21:46.241263 to 20:21:46.241559]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 through cid:33 nsid:1 lba:16384 through lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each command followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [20:21:46.241570 to 20:21:46.242298]
00:23:21.131 [2024-07-15 20:21:46.242310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.131 [2024-07-15 20:21:46.242319] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.131 [2024-07-15 20:21:46.242331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.131 [2024-07-15 20:21:46.242341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.131 [2024-07-15 20:21:46.242353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.131 [2024-07-15 20:21:46.242362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.131 [2024-07-15 20:21:46.242374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.242384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.242396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.242406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.242417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.242426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.242439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.242448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.242462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.242471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.242483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.242493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.242504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.242514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.242526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.242536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.242548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.242557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.242569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.242579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.242590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.242600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.242611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.242621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.242633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.242642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.242653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.242663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.242675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.242684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.242720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:21.132 [2024-07-15 20:21:46.242782] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x173c2a0 was disconnected and freed. reset controller. 
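The block of repeated *NOTICE* completions above is the backlog of queued I/O on qpair 1 being drained with the same abort status once the submission queue is deleted for the controller reset. The (00/08) pair printed in each completion line is the NVMe status code type and status code: type 0x0 is the generic status group, and code 0x08 in that group is Command Aborted due to SQ Deletion, which matches the "ABORTED - SQ DELETION" text. As a minimal sketch for reading these passes offline (not SPDK code; the status table is a small subset and "console.log" is a placeholder path for a saved copy of this output), one could decode the pair and tally aborts per queue:

import re
from collections import Counter

# Matches completion lines like: "ABORTED - SQ DELETION (00/08) qid:1 cid:0 ..."
COMPLETION_RE = re.compile(r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) qid:(?P<qid>\d+)")

# Subset of NVMe generic (status code type 0x0) status codes seen in this log.
GENERIC_STATUS = {0x00: "SUCCESS", 0x08: "ABORTED - SQ DELETION"}

def decode(sct: int, sc: int) -> str:
    """Return a readable name for an (SCT, SC) pair; only the generic group is mapped here."""
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct 0x{sct:x} / sc 0x{sc:02x}"

def tally(log_path: str) -> Counter:
    """Count completions per (qid, status name) in a saved console log."""
    counts = Counter()
    with open(log_path) as f:
        for line in f:
            for m in COMPLETION_RE.finditer(line):
                counts[(int(m["qid"]), decode(int(m["sct"], 16), int(m["sc"], 16)))] += 1
    return counts

if __name__ == "__main__":
    for (qid, name), n in sorted(tally("console.log").items()):
        print(f"qid {qid}: {name} x{n}")

Run against a saved copy of this console output, such a tally would show every outstanding command on the I/O queue completing with the same SQ-deletion abort status before the "disconnected and freed. reset controller." message, which is the behavior these reset passes record.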
00:23:21.132 [2024-07-15 20:21:46.242988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 
20:21:46.243219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 
20:21:46.243446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.132 [2024-07-15 20:21:46.243587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.132 [2024-07-15 20:21:46.243599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.243609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.243621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.243630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.243643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.243653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 
20:21:46.243664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.243674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.243685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.243695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.243707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.243716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.243728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.243737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.243749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.243759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.243770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.243780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.243792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.243801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.243813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.243823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.243837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.243847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.243859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.243868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 
20:21:46.243880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.243890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.243902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.243911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.243923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.243932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.243944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.243954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.243965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.243975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.243987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.243997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.244009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.244018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.244030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.244039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.244051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.244061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.244073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.244082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 
20:21:46.244094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.244105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.244117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.244126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.244138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.244147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.244159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.244168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.244180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.244190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.244201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.244211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.244222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.244232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.244244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.244257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.244269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.244279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.244291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.244300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 
20:21:46.244312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.244321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.244333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.244343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.244354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.244364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.244377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.133 [2024-07-15 20:21:46.244387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.245251] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1740040 was disconnected and freed. reset controller. 00:23:21.133 [2024-07-15 20:21:46.245351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.133 [2024-07-15 20:21:46.245365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.245377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.133 [2024-07-15 20:21:46.245387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.245397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.133 [2024-07-15 20:21:46.245407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.245417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.133 [2024-07-15 20:21:46.245427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.133 [2024-07-15 20:21:46.245436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e9630 is same with the state(5) to be set 00:23:21.134 [2024-07-15 20:21:46.245469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.245481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.245492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.245501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.245512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.245521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.245532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.245541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.245551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d20b0 is same with the state(5) to be set 00:23:21.134 [2024-07-15 20:21:46.245577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.245588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.245598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.245608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.245618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.245631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.245642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.245651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.245661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639a80 is same with the state(5) to be set 00:23:21.134 [2024-07-15 20:21:46.245694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.245705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.245716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.245725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.245735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:21.134 [2024-07-15 20:21:46.245744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.245755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.245764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.245774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1638ce0 is same with the state(5) to be set 00:23:21.134 [2024-07-15 20:21:46.245801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.245812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.245822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.245832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.245842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.245851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.245861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.245871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.245880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165a540 is same with the state(5) to be set 00:23:21.134 [2024-07-15 20:21:46.245915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.245926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.245936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.245948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.245959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.245968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.245979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.245988] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.245997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d59f0 is same with the state(5) to be set 00:23:21.134 [2024-07-15 20:21:46.246029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.246040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.246051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.246060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.246070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.246080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.246090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.246099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.246108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639e00 is same with the state(5) to be set 00:23:21.134 [2024-07-15 20:21:46.246138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.246149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.246160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.246169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.246180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.246189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.246199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.134 [2024-07-15 20:21:46.246208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.134 [2024-07-15 20:21:46.246218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1201c00 is same with the state(5) to be set 00:23:21.134 [2024-07-15 20:21:46.246249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.135 
[2024-07-15 20:21:46.246268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.135 [2024-07-15 20:21:46.246281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.135 [2024-07-15 20:21:46.246291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.135 [2024-07-15 20:21:46.246302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.135 [2024-07-15 20:21:46.246311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.135 [2024-07-15 20:21:46.246321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.135 [2024-07-15 20:21:46.246330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.135 [2024-07-15 20:21:46.246339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e1dd0 is same with the state(5) to be set 00:23:21.135 [2024-07-15 20:21:46.246369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.135 [2024-07-15 20:21:46.246381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.135 [2024-07-15 20:21:46.246391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.135 [2024-07-15 20:21:46.246401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.135 [2024-07-15 20:21:46.246411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.135 [2024-07-15 20:21:46.246420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.135 [2024-07-15 20:21:46.246430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.135 [2024-07-15 20:21:46.246440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.135 [2024-07-15 20:21:46.246449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1616120 is same with the state(5) to be set 00:23:21.135 [2024-07-15 20:21:46.246939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.135 [2024-07-15 20:21:46.246965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.135 [2024-07-15 20:21:46.246981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.135 [2024-07-15 20:21:46.246991] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.136 [2024-07-15 20:21:46.247788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.136 [2024-07-15 20:21:46.247798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.247809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.247818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.247830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.247840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.247852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.247861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.247875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.247884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.247896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.247905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.247917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.247927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.247939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.247948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.247960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.247969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.247981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.247990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:21.137 [2024-07-15 20:21:46.248087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 
[2024-07-15 20:21:46.248313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:21.137 [2024-07-15 20:21:46.248440] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x178df70 was disconnected and freed. reset controller. 00:23:21.137 [2024-07-15 20:21:46.248484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.137 [2024-07-15 20:21:46.248829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.137 [2024-07-15 20:21:46.248841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.248850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.248862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.248872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.248883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.248893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.248905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.248914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.248926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.248935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.248947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.248956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.248968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.248978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.248990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.248999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249077] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.138 [2024-07-15 20:21:46.249628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.138 [2024-07-15 20:21:46.249640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.139 [2024-07-15 20:21:46.249649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.139 [2024-07-15 20:21:46.249661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.139 [2024-07-15 20:21:46.249670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.139 [2024-07-15 20:21:46.249681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.139 [2024-07-15 20:21:46.249691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.139 [2024-07-15 20:21:46.249703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.139 [2024-07-15 20:21:46.249712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.139 [2024-07-15 20:21:46.249723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.139 [2024-07-15 20:21:46.249732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.139 [2024-07-15 20:21:46.249744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.139 [2024-07-15 20:21:46.249754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.139 [2024-07-15 20:21:46.249765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.139 [2024-07-15 20:21:46.249774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.139 [2024-07-15 20:21:46.249787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.139 [2024-07-15 20:21:46.249796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.139 [2024-07-15 20:21:46.249808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.139 [2024-07-15 20:21:46.249817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.139 [2024-07-15 20:21:46.249829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.139 [2024-07-15 20:21:46.249838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.139 [2024-07-15 20:21:46.249850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.139 [2024-07-15 20:21:46.249859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.139 [2024-07-15 20:21:46.249869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1610100 is same with the state(5) to be set 00:23:21.139 [2024-07-15 20:21:46.249932] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1610100 was disconnected and freed. reset controller. 
00:23:21.139 [2024-07-15 20:21:46.255805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:21.139 [2024-07-15 20:21:46.255844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:21.139 [2024-07-15 20:21:46.255857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:21.139 [2024-07-15 20:21:46.255876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1201c00 (9): Bad file descriptor 00:23:21.139 [2024-07-15 20:21:46.255891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d20b0 (9): Bad file descriptor 00:23:21.139 [2024-07-15 20:21:46.255905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1638ce0 (9): Bad file descriptor 00:23:21.139 [2024-07-15 20:21:46.255919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e9630 (9): Bad file descriptor 00:23:21.139 [2024-07-15 20:21:46.255943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1639a80 (9): Bad file descriptor 00:23:21.139 [2024-07-15 20:21:46.255961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165a540 (9): Bad file descriptor 00:23:21.139 [2024-07-15 20:21:46.255984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d59f0 (9): Bad file descriptor 00:23:21.139 [2024-07-15 20:21:46.256005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1639e00 (9): Bad file descriptor 00:23:21.139 [2024-07-15 20:21:46.256027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e1dd0 (9): Bad file descriptor 00:23:21.139 [2024-07-15 20:21:46.256045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1616120 (9): Bad file descriptor 00:23:21.139 [2024-07-15 20:21:46.257078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:21.139 [2024-07-15 20:21:46.257198] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.139 [2024-07-15 20:21:46.257980] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.139 [2024-07-15 20:21:46.258050] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.139 [2024-07-15 20:21:46.258102] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.139 [2024-07-15 20:21:46.258445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.139 [2024-07-15 20:21:46.258466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1638ce0 with addr=10.0.0.2, port=4420 00:23:21.139 [2024-07-15 20:21:46.258476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1638ce0 is same with the state(5) to be set 00:23:21.139 [2024-07-15 20:21:46.258661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.139 [2024-07-15 20:21:46.258676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d20b0 with addr=10.0.0.2, port=4420 00:23:21.139 [2024-07-15 20:21:46.258686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d20b0 is same with the state(5) to be set 
00:23:21.139 [2024-07-15 20:21:46.258796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.139 [2024-07-15 20:21:46.258810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1201c00 with addr=10.0.0.2, port=4420 00:23:21.139 [2024-07-15 20:21:46.258819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1201c00 is same with the state(5) to be set 00:23:21.139 [2024-07-15 20:21:46.259099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.139 [2024-07-15 20:21:46.259113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1639a80 with addr=10.0.0.2, port=4420 00:23:21.139 [2024-07-15 20:21:46.259122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639a80 is same with the state(5) to be set 00:23:21.139 [2024-07-15 20:21:46.259190] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.139 [2024-07-15 20:21:46.259231] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:21.139 [2024-07-15 20:21:46.259327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1638ce0 (9): Bad file descriptor 00:23:21.139 [2024-07-15 20:21:46.259344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d20b0 (9): Bad file descriptor 00:23:21.139 [2024-07-15 20:21:46.259356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1201c00 (9): Bad file descriptor 00:23:21.139 [2024-07-15 20:21:46.259368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1639a80 (9): Bad file descriptor 00:23:21.139 [2024-07-15 20:21:46.259435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:21.139 [2024-07-15 20:21:46.259446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:21.139 [2024-07-15 20:21:46.259457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:21.139 [2024-07-15 20:21:46.259473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:21.139 [2024-07-15 20:21:46.259482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:21.139 [2024-07-15 20:21:46.259490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:21.139 [2024-07-15 20:21:46.259505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:21.139 [2024-07-15 20:21:46.259513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:21.139 [2024-07-15 20:21:46.259522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:23:21.139 [2024-07-15 20:21:46.259536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:21.139 [2024-07-15 20:21:46.259544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:21.139 [2024-07-15 20:21:46.259553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:21.139 [2024-07-15 20:21:46.259603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.139 [2024-07-15 20:21:46.259613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.139 [2024-07-15 20:21:46.259621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.139 [2024-07-15 20:21:46.259628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.139 [2024-07-15 20:21:46.265985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.139 [2024-07-15 20:21:46.266005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.139 [2024-07-15 20:21:46.266023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.139 [2024-07-15 20:21:46.266034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.139 [2024-07-15 20:21:46.266046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.139 [2024-07-15 20:21:46.266056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.139 [2024-07-15 20:21:46.266067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.139 [2024-07-15 20:21:46.266082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.139 [2024-07-15 20:21:46.266094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.139 [2024-07-15 20:21:46.266104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.140 [2024-07-15 20:21:46.266813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.140 [2024-07-15 20:21:46.266846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.140 [2024-07-15 20:21:46.266856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.266867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.266876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.266890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.266900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.266911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.266920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.266932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.266942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.266953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.266963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.266975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.266984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.266996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.267005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.267017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 
20:21:46.267026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.267037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.267046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.267059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.267068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.267080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.267089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.267101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.267111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.267122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.267132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.267143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.267154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.267166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.267175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.267187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.267196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.267208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.267217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.267230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.267239] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.267251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.267264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.267275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.267285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.267296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.267306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.267318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.267327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.267338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.267348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.267359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.267369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.267379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a130 is same with the state(5) to be set 00:23:21.141 [2024-07-15 20:21:46.268854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.268872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.268886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.268899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.268912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.268922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.268934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.268943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.268955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.268964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.268976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.268985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.268997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.269006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.269018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.269028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.269039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.269049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.141 [2024-07-15 20:21:46.269061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.141 [2024-07-15 20:21:46.269069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269144] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269798] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.142 [2024-07-15 20:21:46.269947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.142 [2024-07-15 20:21:46.269956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.269968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.269977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.269990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.270000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.270012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.270021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.270033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.270042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.270054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.270064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.270075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.270084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.270096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.270105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.270117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.270126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.270138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.270147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.270159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.270168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.270180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.270190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.270201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.270210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.270222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.270232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.270242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b540 is same with the state(5) to be set 00:23:21.143 [2024-07-15 20:21:46.271677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.271697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.271712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.271722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.271734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.271743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.271756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.271766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.271778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.271788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.271800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.271809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.271821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.271830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.271842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.271852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.271864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.271874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.271885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.271895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.271907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.271917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.271929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.271938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.271950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.271963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.271974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.271984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.271995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.272005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.272016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.272026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.272037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.272047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.272058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.272068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.272080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.272089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.272101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.272110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.272123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.272132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.272144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.272153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.272165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.272174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.272186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.272194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.272206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.272216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.272229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.272239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.272251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.272265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.272277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.272286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.272298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.143 [2024-07-15 20:21:46.272307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:21.143 [2024-07-15 20:21:46.272319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:21.144 [2024-07-15 20:21:46.272538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 
20:21:46.272751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.144 [2024-07-15 20:21:46.272889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.144 [2024-07-15 20:21:46.272901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.272910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.272922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.272931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.272943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.272952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.272964] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.272973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.272985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.272994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.273006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.273015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.273027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.273038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.273050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.273059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.273072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178ca10 is same with the state(5) to be set 00:23:21.145 [2024-07-15 20:21:46.274551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.274573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.274588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.274598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.274610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.274620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.274633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.274642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.274654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.274664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.274676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.274686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.274697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.274707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.274718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.274728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.274740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.274749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.274761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.274771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.274782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.274794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.274807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.274816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.274828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.274837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.274849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.274859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.274871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.274880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.274892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.274902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.274914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.274923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.274935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.274944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.274956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.274965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.274977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.274986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.274998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.275007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.275020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.275029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.275041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.275050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.275063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.275073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.275085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.275094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.275106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.275115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.275127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.275136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.275148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.275158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.275169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.275179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.275191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.275200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.275212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.145 [2024-07-15 20:21:46.275222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.145 [2024-07-15 20:21:46.275234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.275937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.275948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16115d0 is same with the state(5) to be set 00:23:21.146 [2024-07-15 20:21:46.277430] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.277450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.277464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.277474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.277486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.277496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.277508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.277517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.277529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.277538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.277550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.277559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.277571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.277581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.277592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.277601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.277613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.146 [2024-07-15 20:21:46.277623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.146 [2024-07-15 20:21:46.277638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.277648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.277660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.277669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.277681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.277690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.277702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.277711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.277723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.277732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.277744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.277753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.277765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.277775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.277786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.277796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.277807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.277817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.277829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.277838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.277850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.277859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.277871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.277881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.277893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.277904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.277916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.277926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.277938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.277947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.277959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.277969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.277980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.277990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.147 [2024-07-15 20:21:46.278334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 
20:21:46.278547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.147 [2024-07-15 20:21:46.278559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.147 [2024-07-15 20:21:46.278568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.278579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.278589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.278601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.278610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.278622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.278631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.278642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.278652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.278663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.278673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.278684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.278694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.278705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.278715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.278728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.278737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.278749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.278758] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.278770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.278779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.278791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.278800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.278813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.278822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.278833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173d790 is same with the state(5) to be set 00:23:21.148 [2024-07-15 20:21:46.280292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.148 [2024-07-15 20:21:46.280926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.148 [2024-07-15 20:21:46.280936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.280948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.280957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.280969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.280979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.280996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.149 [2024-07-15 20:21:46.281675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-07-15 20:21:46.281685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173eb50 is same with the state(5) to be set 00:23:21.149 [2024-07-15 20:21:46.283412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:21.150 [2024-07-15 20:21:46.283437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:21.150 [2024-07-15 20:21:46.283450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:21.150 [2024-07-15 20:21:46.283464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:21.150 [2024-07-15 20:21:46.283558] 
bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:21.150 [2024-07-15 20:21:46.283574] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:21.150 [2024-07-15 20:21:46.283664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:23:21.150 task offset: 23040 on job bdev=Nvme7n1 fails
00:23:21.150 
00:23:21.150 Latency(us)
00:23:21.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:21.150 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.150 Job: Nvme1n1 ended in about 1.00 seconds with error
00:23:21.150 Verification LBA range: start 0x0 length 0x400
00:23:21.150 Nvme1n1 : 1.00 127.58 7.97 63.79 0.00 330116.50 20018.27 305040.29
00:23:21.150 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.150 Job: Nvme2n1 ended in about 1.01 seconds with error
00:23:21.150 Verification LBA range: start 0x0 length 0x400
00:23:21.150 Nvme2n1 : 1.01 127.22 7.95 63.61 0.00 323193.02 37176.79 278349.27
00:23:21.150 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.150 Job: Nvme3n1 ended in about 1.01 seconds with error
00:23:21.150 Verification LBA range: start 0x0 length 0x400
00:23:21.150 Nvme3n1 : 1.01 63.43 3.96 63.43 0.00 474746.88 32648.84 381300.36
00:23:21.150 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.150 Job: Nvme4n1 ended in about 0.99 seconds with error
00:23:21.150 Verification LBA range: start 0x0 length 0x400
00:23:21.150 Nvme4n1 : 0.99 194.16 12.13 64.72 0.00 226165.12 6434.44 301227.29
00:23:21.150 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.150 Job: Nvme5n1 ended in about 0.99 seconds with error
00:23:21.150 Verification LBA range: start 0x0 length 0x400
00:23:21.150 Nvme5n1 : 0.99 193.90 12.12 64.63 0.00 220568.90 10604.92 303133.79
00:23:21.150 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.150 Job: Nvme6n1 ended in about 1.01 seconds with error
00:23:21.150 Verification LBA range: start 0x0 length 0x400
00:23:21.150 Nvme6n1 : 1.01 126.50 7.91 63.25 0.00 293683.82 28240.06 308853.29
00:23:21.150 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.150 Job: Nvme7n1 ended in about 0.99 seconds with error
00:23:21.150 Verification LBA range: start 0x0 length 0x400
00:23:21.150 Nvme7n1 : 0.99 129.82 8.11 64.91 0.00 277167.63 9175.04 316479.30
00:23:21.150 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.150 Job: Nvme8n1 ended in about 1.01 seconds with error
00:23:21.150 Verification LBA range: start 0x0 length 0x400
00:23:21.150 Nvme8n1 : 1.01 126.14 7.88 63.07 0.00 278900.05 20494.89 305040.29
00:23:21.150 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.150 Job: Nvme9n1 ended in about 1.02 seconds with error
00:23:21.150 Verification LBA range: start 0x0 length 0x400
00:23:21.150 Nvme9n1 : 1.02 125.79 7.86 62.90 0.00 271955.47 17277.67 265003.75
00:23:21.150 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.150 Job: Nvme10n1 ended in about 0.99 seconds with error
00:23:21.150 Verification LBA range: start 0x0 length 0x400
00:23:21.150 Nvme10n1 : 0.99 129.64 8.10 64.82 0.00 254175.42 49330.73 305040.29
00:23:21.150 ===================================================================================================================
00:23:21.150 Total : 1344.19 84.01 639.13 0.00 284645.34 6434.44 381300.36
00:23:21.150 [2024-07-15 20:21:46.316699] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:21.150 [2024-07-15 20:21:46.316743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:23:21.150 [2024-07-15 20:21:46.317124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:21.150 [2024-07-15 20:21:46.317146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1616120 with addr=10.0.0.2, port=4420
00:23:21.150 [2024-07-15 20:21:46.317158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1616120 is same with the state(5) to be set
00:23:21.150 [2024-07-15 20:21:46.317390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:21.150 [2024-07-15 20:21:46.317405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1dd0 with addr=10.0.0.2, port=4420
00:23:21.150 [2024-07-15 20:21:46.317416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e1dd0 is same with the state(5) to be set
00:23:21.150 [2024-07-15 20:21:46.317675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:21.150 [2024-07-15 20:21:46.317690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e9630 with addr=10.0.0.2, port=4420
00:23:21.150 [2024-07-15 20:21:46.317700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e9630 is same with the state(5) to be set
00:23:21.150 [2024-07-15 20:21:46.317915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:21.150 [2024-07-15 20:21:46.317929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1639e00 with addr=10.0.0.2, port=4420
00:23:21.150 [2024-07-15 20:21:46.317939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639e00 is same with the state(5) to be set
00:23:21.150 [2024-07-15 20:21:46.319952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:23:21.150 [2024-07-15 20:21:46.319974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:23:21.150 [2024-07-15 20:21:46.319987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:23:21.150 [2024-07-15 20:21:46.319999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:23:21.150 [2024-07-15 20:21:46.320325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:21.150 [2024-07-15 20:21:46.320345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d59f0 with addr=10.0.0.2, port=4420
00:23:21.150 [2024-07-15 20:21:46.320355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d59f0 is same with the state(5) to be set
00:23:21.150 [2024-07-15 20:21:46.320534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:21.150 [2024-07-15 20:21:46.320549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a540 with addr=10.0.0.2, port=4420
00:23:21.150 [2024-07-15 20:21:46.320559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165a540 is same with the state(5) to be set 00:23:21.150 [2024-07-15 20:21:46.320574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1616120 (9): Bad file descriptor 00:23:21.150 [2024-07-15 20:21:46.320590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e1dd0 (9): Bad file descriptor 00:23:21.150 [2024-07-15 20:21:46.320603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e9630 (9): Bad file descriptor 00:23:21.150 [2024-07-15 20:21:46.320616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1639e00 (9): Bad file descriptor 00:23:21.150 [2024-07-15 20:21:46.320656] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.150 [2024-07-15 20:21:46.320671] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.150 [2024-07-15 20:21:46.320689] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.150 [2024-07-15 20:21:46.320703] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.150 [2024-07-15 20:21:46.321348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.150 [2024-07-15 20:21:46.321373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1639a80 with addr=10.0.0.2, port=4420 00:23:21.150 [2024-07-15 20:21:46.321385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1639a80 is same with the state(5) to be set 00:23:21.150 [2024-07-15 20:21:46.321589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.150 [2024-07-15 20:21:46.321604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1201c00 with addr=10.0.0.2, port=4420 00:23:21.150 [2024-07-15 20:21:46.321614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1201c00 is same with the state(5) to be set 00:23:21.150 [2024-07-15 20:21:46.321743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.150 [2024-07-15 20:21:46.321758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d20b0 with addr=10.0.0.2, port=4420 00:23:21.150 [2024-07-15 20:21:46.321768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d20b0 is same with the state(5) to be set 00:23:21.150 [2024-07-15 20:21:46.321946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.150 [2024-07-15 20:21:46.321961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1638ce0 with addr=10.0.0.2, port=4420 00:23:21.150 [2024-07-15 20:21:46.321971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1638ce0 is same with the state(5) to be set 00:23:21.150 [2024-07-15 20:21:46.321985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d59f0 (9): Bad file descriptor 00:23:21.150 [2024-07-15 20:21:46.321999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165a540 (9): Bad file descriptor 00:23:21.150 
[2024-07-15 20:21:46.322011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:21.150 [2024-07-15 20:21:46.322020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:21.150 [2024-07-15 20:21:46.322031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:21.150 [2024-07-15 20:21:46.322046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:21.150 [2024-07-15 20:21:46.322055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:21.150 [2024-07-15 20:21:46.322064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:21.150 [2024-07-15 20:21:46.322078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:21.150 [2024-07-15 20:21:46.322087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:21.150 [2024-07-15 20:21:46.322096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:21.150 [2024-07-15 20:21:46.322111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:21.150 [2024-07-15 20:21:46.322119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:21.150 [2024-07-15 20:21:46.322129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:21.150 [2024-07-15 20:21:46.322219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.150 [2024-07-15 20:21:46.322230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.150 [2024-07-15 20:21:46.322238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.150 [2024-07-15 20:21:46.322246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.150 [2024-07-15 20:21:46.322262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1639a80 (9): Bad file descriptor 00:23:21.151 [2024-07-15 20:21:46.322274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1201c00 (9): Bad file descriptor 00:23:21.151 [2024-07-15 20:21:46.322287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d20b0 (9): Bad file descriptor 00:23:21.151 [2024-07-15 20:21:46.322299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1638ce0 (9): Bad file descriptor 00:23:21.151 [2024-07-15 20:21:46.322310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:21.151 [2024-07-15 20:21:46.322319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:21.151 [2024-07-15 20:21:46.322328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:23:21.151 [2024-07-15 20:21:46.322340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:21.151 [2024-07-15 20:21:46.322349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:21.151 [2024-07-15 20:21:46.322362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:21.151 [2024-07-15 20:21:46.322398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.151 [2024-07-15 20:21:46.322407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.151 [2024-07-15 20:21:46.322416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:21.151 [2024-07-15 20:21:46.322424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:21.151 [2024-07-15 20:21:46.322433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:21.151 [2024-07-15 20:21:46.322445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:21.151 [2024-07-15 20:21:46.322454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:21.151 [2024-07-15 20:21:46.322463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:21.151 [2024-07-15 20:21:46.322475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:21.151 [2024-07-15 20:21:46.322484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:21.151 [2024-07-15 20:21:46.322493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:21.151 [2024-07-15 20:21:46.322505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:21.151 [2024-07-15 20:21:46.322514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:21.151 [2024-07-15 20:21:46.322522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:21.151 [2024-07-15 20:21:46.322559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.151 [2024-07-15 20:21:46.322568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.151 [2024-07-15 20:21:46.322576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.151 [2024-07-15 20:21:46.322584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:21.411 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:21.411 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:22.345 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 116768 00:23:22.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (116768) - No such process 00:23:22.345 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:22.345 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:22.345 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:22.603 rmmod nvme_tcp 00:23:22.603 rmmod nvme_fabrics 00:23:22.603 rmmod nvme_keyring 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.603 20:21:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.506 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:24.506 00:23:24.506 real 0m8.008s 00:23:24.506 user 0m20.092s 00:23:24.506 sys 0m1.327s 00:23:24.506 
20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:23:24.506 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:24.506 ************************************
00:23:24.506 END TEST nvmf_shutdown_tc3
00:23:24.506 ************************************
00:23:24.506 20:21:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0
00:23:24.506 20:21:49 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:23:24.506 
00:23:24.506 real 0m31.148s
00:23:24.506 user 1m17.572s
00:23:24.506 sys 0m8.380s
00:23:24.506 20:21:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable
00:23:24.506 20:21:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:23:24.506 ************************************
00:23:24.506 END TEST nvmf_shutdown
00:23:24.506 ************************************
00:23:24.764 20:21:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:23:24.764 20:21:49 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target
00:23:24.764 20:21:49 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:23:24.764 20:21:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:24.764 20:21:49 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host
00:23:24.764 20:21:49 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:23:24.764 20:21:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:24.764 20:21:49 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]]
00:23:24.764 20:21:49 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:23:24.764 20:21:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:23:24.764 20:21:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:24.764 20:21:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:24.764 ************************************
00:23:24.764 START TEST nvmf_multicontroller
00:23:24.764 ************************************
00:23:24.764 20:21:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:23:24.764 * Looking for test storage...
00:23:24.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:24.764 20:21:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:24.765 20:21:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:24.765 20:21:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:24.765 20:21:50 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:24.765 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.765 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:24.765 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:24.765 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:24.765 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.765 20:21:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.765 20:21:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.765 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:24.765 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:24.765 20:21:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:24.765 20:21:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:30.035 20:21:55 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:30.035 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:30.035 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:30.036 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:30.036 Found net devices under 0000:af:00.0: cvl_0_0 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:30.036 Found net devices under 0000:af:00.1: cvl_0_1 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:30.036 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:30.294 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:30.294 20:21:55 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:30.294 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:30.294 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:30.294 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:30.294 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:30.294 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:30.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:30.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:23:30.294 00:23:30.294 --- 10.0.0.2 ping statistics --- 00:23:30.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.294 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:23:30.294 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:30.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:30.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:23:30.294 00:23:30.294 --- 10.0.0.1 ping statistics --- 00:23:30.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.294 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:23:30.294 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.294 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:30.294 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:30.294 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.294 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:30.294 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:30.294 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.295 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:30.295 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:30.295 20:21:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:30.295 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:30.295 20:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:30.295 20:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.295 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=120963 00:23:30.295 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:30.295 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 120963 00:23:30.295 20:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 120963 ']' 00:23:30.295 20:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.295 20:21:55 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:30.295 20:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.295 20:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:30.295 20:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.295 [2024-07-15 20:21:55.635604] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:23:30.295 [2024-07-15 20:21:55.635660] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.553 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.553 [2024-07-15 20:21:55.711785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:30.553 [2024-07-15 20:21:55.803016] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.553 [2024-07-15 20:21:55.803059] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.553 [2024-07-15 20:21:55.803068] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.553 [2024-07-15 20:21:55.803077] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.553 [2024-07-15 20:21:55.803085] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:30.553 [2024-07-15 20:21:55.803145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.553 [2024-07-15 20:21:55.803238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.553 [2024-07-15 20:21:55.803241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.811 20:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.811 20:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:30.811 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:30.812 20:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:30.812 20:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.812 20:21:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.812 20:21:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:30.812 20:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.812 20:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.812 [2024-07-15 20:21:55.948461] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.812 20:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.812 20:21:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:30.812 20:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.812 20:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.812 Malloc0 00:23:30.812 20:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.812 20:21:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:30.812 20:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.812 20:21:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.812 [2024-07-15 20:21:56.013187] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.812 
20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.812 [2024-07-15 20:21:56.021140] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.812 Malloc1 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=121225 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 121225 /var/tmp/bdevperf.sock 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 121225 ']' 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:30.812 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.071 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:31.071 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:31.071 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:31.071 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.071 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.330 NVMe0n1 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.330 1 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.330 request: 00:23:31.330 { 00:23:31.330 "name": "NVMe0", 00:23:31.330 "trtype": "tcp", 00:23:31.330 "traddr": "10.0.0.2", 00:23:31.330 "adrfam": "ipv4", 00:23:31.330 "trsvcid": "4420", 00:23:31.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.330 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:31.330 "hostaddr": "10.0.0.2", 00:23:31.330 "hostsvcid": "60000", 00:23:31.330 "prchk_reftag": false, 00:23:31.330 "prchk_guard": false, 00:23:31.330 "hdgst": false, 00:23:31.330 "ddgst": false, 00:23:31.330 "method": "bdev_nvme_attach_controller", 00:23:31.330 "req_id": 1 00:23:31.330 } 00:23:31.330 Got JSON-RPC error response 00:23:31.330 response: 00:23:31.330 { 00:23:31.330 "code": -114, 00:23:31.330 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:31.330 } 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.330 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.589 request: 00:23:31.589 { 00:23:31.589 "name": "NVMe0", 00:23:31.589 "trtype": "tcp", 00:23:31.589 "traddr": "10.0.0.2", 00:23:31.589 "adrfam": "ipv4", 00:23:31.589 "trsvcid": "4420", 00:23:31.589 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:31.589 "hostaddr": "10.0.0.2", 00:23:31.589 "hostsvcid": "60000", 00:23:31.589 "prchk_reftag": false, 00:23:31.589 "prchk_guard": false, 00:23:31.589 
"hdgst": false, 00:23:31.589 "ddgst": false, 00:23:31.589 "method": "bdev_nvme_attach_controller", 00:23:31.589 "req_id": 1 00:23:31.589 } 00:23:31.589 Got JSON-RPC error response 00:23:31.589 response: 00:23:31.589 { 00:23:31.589 "code": -114, 00:23:31.589 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:31.589 } 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.589 request: 00:23:31.589 { 00:23:31.589 "name": "NVMe0", 00:23:31.589 "trtype": "tcp", 00:23:31.589 "traddr": "10.0.0.2", 00:23:31.589 "adrfam": "ipv4", 00:23:31.589 "trsvcid": "4420", 00:23:31.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.589 "hostaddr": "10.0.0.2", 00:23:31.589 "hostsvcid": "60000", 00:23:31.589 "prchk_reftag": false, 00:23:31.589 "prchk_guard": false, 00:23:31.589 "hdgst": false, 00:23:31.589 "ddgst": false, 00:23:31.589 "multipath": "disable", 00:23:31.589 "method": "bdev_nvme_attach_controller", 00:23:31.589 "req_id": 1 00:23:31.589 } 00:23:31.589 Got JSON-RPC error response 00:23:31.589 response: 00:23:31.589 { 00:23:31.589 "code": -114, 00:23:31.589 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:31.589 } 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:31.589 20:21:56 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.589 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.589 request: 00:23:31.589 { 00:23:31.589 "name": "NVMe0", 00:23:31.589 "trtype": "tcp", 00:23:31.589 "traddr": "10.0.0.2", 00:23:31.589 "adrfam": "ipv4", 00:23:31.589 "trsvcid": "4420", 00:23:31.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.589 "hostaddr": "10.0.0.2", 00:23:31.589 "hostsvcid": "60000", 00:23:31.589 "prchk_reftag": false, 00:23:31.589 "prchk_guard": false, 00:23:31.589 "hdgst": false, 00:23:31.590 "ddgst": false, 00:23:31.590 "multipath": "failover", 00:23:31.590 "method": "bdev_nvme_attach_controller", 00:23:31.590 "req_id": 1 00:23:31.590 } 00:23:31.590 Got JSON-RPC error response 00:23:31.590 response: 00:23:31.590 { 00:23:31.590 "code": -114, 00:23:31.590 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:31.590 } 00:23:31.590 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:31.590 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:31.590 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:31.590 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:31.590 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:31.590 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:31.590 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.590 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.590 00:23:31.590 20:21:56 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.590 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:31.590 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.590 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.590 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.590 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:31.590 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.590 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.848 00:23:31.848 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.848 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:31.848 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:31.848 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.848 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:31.848 20:21:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.848 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:31.848 20:21:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:32.781 0 00:23:32.781 20:21:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:32.781 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.781 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.038 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.038 20:21:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 121225 00:23:33.038 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 121225 ']' 00:23:33.038 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 121225 00:23:33.038 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:33.038 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:33.038 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121225 00:23:33.038 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:33.038 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:33.038 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121225' 00:23:33.038 killing process with pid 121225 00:23:33.038 20:21:58 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 121225 00:23:33.038 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 121225 00:23:33.038 20:21:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:33.038 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.038 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.296 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.296 20:21:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:33.296 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.296 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.296 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.296 20:21:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:33.296 20:21:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:33.296 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:33.296 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:33.296 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:33.296 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:33.296 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:33.296 [2024-07-15 20:21:56.129260] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:23:33.297 [2024-07-15 20:21:56.129325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121225 ] 00:23:33.297 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.297 [2024-07-15 20:21:56.209194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.297 [2024-07-15 20:21:56.294988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.297 [2024-07-15 20:21:56.950042] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 323041d7-69cc-4a5b-bafc-01ad727cc6b8 already exists 00:23:33.297 [2024-07-15 20:21:56.950076] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:323041d7-69cc-4a5b-bafc-01ad727cc6b8 alias for bdev NVMe1n1 00:23:33.297 [2024-07-15 20:21:56.950087] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:33.297 Running I/O for 1 seconds... 
00:23:33.297 00:23:33.297 Latency(us) 00:23:33.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.297 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:33.297 NVMe0n1 : 1.01 16739.29 65.39 0.00 0.00 7624.65 4617.31 13881.72 00:23:33.297 =================================================================================================================== 00:23:33.297 Total : 16739.29 65.39 0.00 0.00 7624.65 4617.31 13881.72 00:23:33.297 Received shutdown signal, test time was about 1.000000 seconds 00:23:33.297 00:23:33.297 Latency(us) 00:23:33.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.297 =================================================================================================================== 00:23:33.297 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.297 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:33.297 rmmod nvme_tcp 00:23:33.297 rmmod nvme_fabrics 00:23:33.297 rmmod nvme_keyring 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 120963 ']' 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 120963 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 120963 ']' 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 120963 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120963 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120963' 00:23:33.297 killing process with pid 120963 00:23:33.297 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 120963 00:23:33.297 20:21:58 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 120963 00:23:33.555 20:21:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:33.555 20:21:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:33.555 20:21:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:33.555 20:21:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:33.555 20:21:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:33.555 20:21:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.555 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.555 20:21:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.087 20:22:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:36.087 00:23:36.087 real 0m10.903s 00:23:36.087 user 0m12.828s 00:23:36.087 sys 0m4.914s 00:23:36.087 20:22:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:36.087 20:22:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.087 ************************************ 00:23:36.087 END TEST nvmf_multicontroller 00:23:36.087 ************************************ 00:23:36.087 20:22:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:36.087 20:22:00 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:36.087 20:22:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:36.087 20:22:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:36.087 20:22:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:36.087 ************************************ 00:23:36.087 START TEST nvmf_aer 00:23:36.087 ************************************ 00:23:36.087 20:22:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:36.087 * Looking for test storage... 
00:23:36.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:36.087 20:22:01 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.087 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:36.087 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:36.088 20:22:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:41.352 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 
0x159b)' 00:23:41.352 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:41.352 Found net devices under 0000:af:00.0: cvl_0_0 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:41.352 Found net devices under 0000:af:00.1: cvl_0_1 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.352 
20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:41.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:23:41.352 00:23:41.352 --- 10.0.0.2 ping statistics --- 00:23:41.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.352 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:41.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:23:41.352 00:23:41.352 --- 10.0.0.1 ping statistics --- 00:23:41.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.352 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=125114 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 125114 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 125114 ']' 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.352 20:22:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:41.352 [2024-07-15 20:22:06.482775] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:23:41.353 [2024-07-15 20:22:06.482833] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.353 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.353 [2024-07-15 20:22:06.568043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:41.353 [2024-07-15 20:22:06.660388] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.353 [2024-07-15 20:22:06.660430] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:41.353 [2024-07-15 20:22:06.660441] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.353 [2024-07-15 20:22:06.660449] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.353 [2024-07-15 20:22:06.660457] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:41.353 [2024-07-15 20:22:06.660508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.353 [2024-07-15 20:22:06.660529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.353 [2024-07-15 20:22:06.660624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:41.353 [2024-07-15 20:22:06.660627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:42.286 [2024-07-15 20:22:07.477086] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:42.286 Malloc0 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:42.286 [2024-07-15 20:22:07.532774] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.286 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:42.286 [ 00:23:42.286 { 00:23:42.286 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:42.286 "subtype": "Discovery", 00:23:42.286 "listen_addresses": [], 00:23:42.286 "allow_any_host": true, 00:23:42.286 "hosts": [] 00:23:42.286 }, 00:23:42.286 { 00:23:42.286 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.286 "subtype": "NVMe", 00:23:42.286 "listen_addresses": [ 00:23:42.286 { 00:23:42.286 "trtype": "TCP", 00:23:42.286 "adrfam": "IPv4", 00:23:42.286 "traddr": "10.0.0.2", 00:23:42.286 "trsvcid": "4420" 00:23:42.286 } 00:23:42.286 ], 00:23:42.286 "allow_any_host": true, 00:23:42.286 "hosts": [], 00:23:42.286 "serial_number": "SPDK00000000000001", 00:23:42.286 "model_number": "SPDK bdev Controller", 00:23:42.286 "max_namespaces": 2, 00:23:42.286 "min_cntlid": 1, 00:23:42.286 "max_cntlid": 65519, 00:23:42.287 "namespaces": [ 00:23:42.287 { 00:23:42.287 "nsid": 1, 00:23:42.287 "bdev_name": "Malloc0", 00:23:42.287 "name": "Malloc0", 00:23:42.287 "nguid": "6C092F77260A49EF978DBD71DED801B4", 00:23:42.287 "uuid": "6c092f77-260a-49ef-978d-bd71ded801b4" 00:23:42.287 } 00:23:42.287 ] 00:23:42.287 } 00:23:42.287 ] 00:23:42.287 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.287 20:22:07 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:42.287 20:22:07 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:42.287 20:22:07 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=125258 00:23:42.287 20:22:07 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:42.287 20:22:07 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:42.287 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:42.287 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:42.287 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:42.287 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:42.287 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:42.287 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.545 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:42.545 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:42.545 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:42.545 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:42.545 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:42.545 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:23:42.545 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:23:42.545 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:42.545 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:42.545 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:42.545 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:42.545 20:22:07 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:42.545 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.545 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:42.803 Malloc1 00:23:42.803 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.803 20:22:07 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:42.803 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.803 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:42.803 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.803 20:22:07 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:42.803 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.803 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:42.803 [ 00:23:42.803 { 00:23:42.803 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:42.803 "subtype": "Discovery", 00:23:42.803 "listen_addresses": [], 00:23:42.803 "allow_any_host": true, 00:23:42.803 "hosts": [] 00:23:42.803 }, 00:23:42.803 { 00:23:42.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.803 "subtype": "NVMe", 00:23:42.803 "listen_addresses": [ 00:23:42.803 { 00:23:42.803 "trtype": "TCP", 00:23:42.803 "adrfam": "IPv4", 00:23:42.803 "traddr": "10.0.0.2", 00:23:42.803 "trsvcid": "4420" 00:23:42.803 } 00:23:42.803 ], 00:23:42.803 "allow_any_host": true, 00:23:42.803 "hosts": [], 00:23:42.803 "serial_number": "SPDK00000000000001", 00:23:42.804 "model_number": "SPDK bdev Controller", 00:23:42.804 "max_namespaces": 2, 00:23:42.804 "min_cntlid": 1, 00:23:42.804 "max_cntlid": 65519, 00:23:42.804 "namespaces": [ 00:23:42.804 { 00:23:42.804 "nsid": 1, 00:23:42.804 "bdev_name": "Malloc0", 00:23:42.804 "name": "Malloc0", 00:23:42.804 Asynchronous Event Request test 00:23:42.804 Attaching to 10.0.0.2 00:23:42.804 Attached to 10.0.0.2 00:23:42.804 Registering asynchronous event callbacks... 00:23:42.804 Starting namespace attribute notice tests for all controllers... 00:23:42.804 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:42.804 aer_cb - Changed Namespace 00:23:42.804 Cleaning up... 
00:23:42.804 "nguid": "6C092F77260A49EF978DBD71DED801B4", 00:23:42.804 "uuid": "6c092f77-260a-49ef-978d-bd71ded801b4" 00:23:42.804 }, 00:23:42.804 { 00:23:42.804 "nsid": 2, 00:23:42.804 "bdev_name": "Malloc1", 00:23:42.804 "name": "Malloc1", 00:23:42.804 "nguid": "AD15BC1567EF477995B7A7C421CAE5C4", 00:23:42.804 "uuid": "ad15bc15-67ef-4779-95b7-a7c421cae5c4" 00:23:42.804 } 00:23:42.804 ] 00:23:42.804 } 00:23:42.804 ] 00:23:42.804 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.804 20:22:07 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 125258 00:23:42.804 20:22:07 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:42.804 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.804 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:42.804 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.804 20:22:07 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:42.804 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.804 20:22:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:42.804 rmmod nvme_tcp 00:23:42.804 rmmod nvme_fabrics 00:23:42.804 rmmod nvme_keyring 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 125114 ']' 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 125114 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 125114 ']' 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 125114 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125114 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125114' 00:23:42.804 killing process with pid 125114 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 125114 00:23:42.804 20:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 125114 00:23:43.063 20:22:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:43.063 20:22:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:43.063 20:22:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:43.063 20:22:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:43.063 20:22:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:43.063 20:22:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.063 20:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.063 20:22:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.596 20:22:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:45.596 00:23:45.596 real 0m9.490s 00:23:45.596 user 0m8.274s 00:23:45.596 sys 0m4.567s 00:23:45.596 20:22:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:45.596 20:22:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:45.596 ************************************ 00:23:45.596 END TEST nvmf_aer 00:23:45.596 ************************************ 00:23:45.596 20:22:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:45.596 20:22:10 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:45.596 20:22:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:45.596 20:22:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:45.596 20:22:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:45.596 ************************************ 00:23:45.596 START TEST nvmf_async_init 00:23:45.596 ************************************ 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:45.596 * Looking for test storage... 
00:23:45.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=86246063a7a241fe9942f3c1aeb2f433 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:45.596 20:22:10 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:45.596 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:45.597 20:22:10 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:45.597 20:22:10 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:50.883 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:50.883 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:50.883 Found net devices under 0000:af:00.0: cvl_0_0 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
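(Editor's sketch: the device-discovery trace above matches the Intel E810 PCI IDs (0x1592/0x159b) and then resolves each PCI function to its kernel netdev through sysfs. A minimal manual equivalent, assuming the PCI address from this run, would be:)
  pci=0000:af:00.0
  ls /sys/bus/pci/devices/$pci/net/    # -> cvl_0_0 on this host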
00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:50.883 Found net devices under 0000:af:00.1: cvl_0_1 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:50.883 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:51.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:23:51.142 00:23:51.142 --- 10.0.0.2 ping statistics --- 00:23:51.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.142 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:51.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:23:51.142 00:23:51.142 --- 10.0.0.1 ping statistics --- 00:23:51.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.142 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=128973 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 128973 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 128973 ']' 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:51.142 20:22:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.142 [2024-07-15 20:22:16.480378] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
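(Editor's sketch: the nvmf_tcp_init sequence traced above builds the two-sided TCP test bed by moving the target-facing interface into a dedicated network namespace and verifying reachability with ping in both directions. A minimal manual equivalent, using the interface names from this run (cvl_0_0 target side, cvl_0_1 initiator side), would be:)
  sudo ip netns add cvl_0_0_ns_spdk
  sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  sudo ip addr add 10.0.0.1/24 dev cvl_0_1
  sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  sudo ip link set cvl_0_1 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
  sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1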
00:23:51.142 [2024-07-15 20:22:16.480435] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.400 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.400 [2024-07-15 20:22:16.559641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.400 [2024-07-15 20:22:16.647630] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.400 [2024-07-15 20:22:16.647670] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.400 [2024-07-15 20:22:16.647680] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.400 [2024-07-15 20:22:16.647689] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.400 [2024-07-15 20:22:16.647696] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:51.400 [2024-07-15 20:22:16.647717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.328 [2024-07-15 20:22:17.413246] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.328 null0 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.328 20:22:17 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 86246063a7a241fe9942f3c1aeb2f433 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.328 [2024-07-15 20:22:17.453460] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:52.328 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.329 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.600 nvme0n1 00:23:52.600 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.600 20:22:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:52.600 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.600 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.600 [ 00:23:52.600 { 00:23:52.600 "name": "nvme0n1", 00:23:52.600 "aliases": [ 00:23:52.601 "86246063-a7a2-41fe-9942-f3c1aeb2f433" 00:23:52.601 ], 00:23:52.601 "product_name": "NVMe disk", 00:23:52.601 "block_size": 512, 00:23:52.601 "num_blocks": 2097152, 00:23:52.601 "uuid": "86246063-a7a2-41fe-9942-f3c1aeb2f433", 00:23:52.601 "assigned_rate_limits": { 00:23:52.601 "rw_ios_per_sec": 0, 00:23:52.601 "rw_mbytes_per_sec": 0, 00:23:52.601 "r_mbytes_per_sec": 0, 00:23:52.601 "w_mbytes_per_sec": 0 00:23:52.601 }, 00:23:52.601 "claimed": false, 00:23:52.601 "zoned": false, 00:23:52.601 "supported_io_types": { 00:23:52.601 "read": true, 00:23:52.601 "write": true, 00:23:52.601 "unmap": false, 00:23:52.601 "flush": true, 00:23:52.601 "reset": true, 00:23:52.601 "nvme_admin": true, 00:23:52.601 "nvme_io": true, 00:23:52.601 "nvme_io_md": false, 00:23:52.601 "write_zeroes": true, 00:23:52.601 "zcopy": false, 00:23:52.601 "get_zone_info": false, 00:23:52.601 "zone_management": false, 00:23:52.601 "zone_append": false, 00:23:52.601 "compare": true, 00:23:52.601 "compare_and_write": true, 00:23:52.601 "abort": true, 00:23:52.601 "seek_hole": false, 00:23:52.601 "seek_data": false, 00:23:52.601 "copy": true, 00:23:52.601 "nvme_iov_md": false 00:23:52.601 }, 00:23:52.601 "memory_domains": [ 00:23:52.601 { 00:23:52.601 "dma_device_id": "system", 00:23:52.601 "dma_device_type": 1 00:23:52.601 } 00:23:52.601 ], 00:23:52.601 "driver_specific": { 00:23:52.601 "nvme": [ 00:23:52.601 { 00:23:52.601 "trid": { 00:23:52.601 "trtype": "TCP", 00:23:52.601 "adrfam": "IPv4", 00:23:52.601 "traddr": "10.0.0.2", 
00:23:52.601 "trsvcid": "4420", 00:23:52.601 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:52.601 }, 00:23:52.601 "ctrlr_data": { 00:23:52.601 "cntlid": 1, 00:23:52.601 "vendor_id": "0x8086", 00:23:52.601 "model_number": "SPDK bdev Controller", 00:23:52.601 "serial_number": "00000000000000000000", 00:23:52.601 "firmware_revision": "24.09", 00:23:52.601 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:52.601 "oacs": { 00:23:52.601 "security": 0, 00:23:52.601 "format": 0, 00:23:52.601 "firmware": 0, 00:23:52.601 "ns_manage": 0 00:23:52.601 }, 00:23:52.601 "multi_ctrlr": true, 00:23:52.601 "ana_reporting": false 00:23:52.601 }, 00:23:52.601 "vs": { 00:23:52.601 "nvme_version": "1.3" 00:23:52.601 }, 00:23:52.601 "ns_data": { 00:23:52.601 "id": 1, 00:23:52.601 "can_share": true 00:23:52.601 } 00:23:52.601 } 00:23:52.601 ], 00:23:52.601 "mp_policy": "active_passive" 00:23:52.601 } 00:23:52.601 } 00:23:52.601 ] 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.601 [2024-07-15 20:22:17.709998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:52.601 [2024-07-15 20:22:17.710070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d1b50 (9): Bad file descriptor 00:23:52.601 [2024-07-15 20:22:17.842376] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.601 [ 00:23:52.601 { 00:23:52.601 "name": "nvme0n1", 00:23:52.601 "aliases": [ 00:23:52.601 "86246063-a7a2-41fe-9942-f3c1aeb2f433" 00:23:52.601 ], 00:23:52.601 "product_name": "NVMe disk", 00:23:52.601 "block_size": 512, 00:23:52.601 "num_blocks": 2097152, 00:23:52.601 "uuid": "86246063-a7a2-41fe-9942-f3c1aeb2f433", 00:23:52.601 "assigned_rate_limits": { 00:23:52.601 "rw_ios_per_sec": 0, 00:23:52.601 "rw_mbytes_per_sec": 0, 00:23:52.601 "r_mbytes_per_sec": 0, 00:23:52.601 "w_mbytes_per_sec": 0 00:23:52.601 }, 00:23:52.601 "claimed": false, 00:23:52.601 "zoned": false, 00:23:52.601 "supported_io_types": { 00:23:52.601 "read": true, 00:23:52.601 "write": true, 00:23:52.601 "unmap": false, 00:23:52.601 "flush": true, 00:23:52.601 "reset": true, 00:23:52.601 "nvme_admin": true, 00:23:52.601 "nvme_io": true, 00:23:52.601 "nvme_io_md": false, 00:23:52.601 "write_zeroes": true, 00:23:52.601 "zcopy": false, 00:23:52.601 "get_zone_info": false, 00:23:52.601 "zone_management": false, 00:23:52.601 "zone_append": false, 00:23:52.601 "compare": true, 00:23:52.601 "compare_and_write": true, 00:23:52.601 "abort": true, 00:23:52.601 "seek_hole": false, 00:23:52.601 "seek_data": false, 00:23:52.601 "copy": true, 00:23:52.601 "nvme_iov_md": false 00:23:52.601 }, 00:23:52.601 "memory_domains": [ 00:23:52.601 { 00:23:52.601 "dma_device_id": "system", 00:23:52.601 "dma_device_type": 
1 00:23:52.601 } 00:23:52.601 ], 00:23:52.601 "driver_specific": { 00:23:52.601 "nvme": [ 00:23:52.601 { 00:23:52.601 "trid": { 00:23:52.601 "trtype": "TCP", 00:23:52.601 "adrfam": "IPv4", 00:23:52.601 "traddr": "10.0.0.2", 00:23:52.601 "trsvcid": "4420", 00:23:52.601 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:52.601 }, 00:23:52.601 "ctrlr_data": { 00:23:52.601 "cntlid": 2, 00:23:52.601 "vendor_id": "0x8086", 00:23:52.601 "model_number": "SPDK bdev Controller", 00:23:52.601 "serial_number": "00000000000000000000", 00:23:52.601 "firmware_revision": "24.09", 00:23:52.601 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:52.601 "oacs": { 00:23:52.601 "security": 0, 00:23:52.601 "format": 0, 00:23:52.601 "firmware": 0, 00:23:52.601 "ns_manage": 0 00:23:52.601 }, 00:23:52.601 "multi_ctrlr": true, 00:23:52.601 "ana_reporting": false 00:23:52.601 }, 00:23:52.601 "vs": { 00:23:52.601 "nvme_version": "1.3" 00:23:52.601 }, 00:23:52.601 "ns_data": { 00:23:52.601 "id": 1, 00:23:52.601 "can_share": true 00:23:52.601 } 00:23:52.601 } 00:23:52.601 ], 00:23:52.601 "mp_policy": "active_passive" 00:23:52.601 } 00:23:52.601 } 00:23:52.601 ] 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.CwYuE3axgw 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.CwYuE3axgw 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.601 [2024-07-15 20:22:17.906662] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:52.601 [2024-07-15 20:22:17.906798] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CwYuE3axgw 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
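(Editor's sketch: the rpc_cmd traces in this async_init test amount to the target-side setup below, first over a plain 4420 listener and then, as exercised next in the log, over a TLS secure channel on 4421. rpc_cmd is the autotest wrapper around rpc.py; the rpc.py path is an assumption based on the usual SPDK layout, and the NGUID is the one generated earlier in this run.)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py bdev_null_create null0 1024 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 86246063a7a241fe9942f3c1aeb2f433
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # TLS variant: restrict hosts and listen with a secure channel on 4421
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel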
00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.601 [2024-07-15 20:22:17.914674] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.CwYuE3axgw 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.601 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.601 [2024-07-15 20:22:17.926735] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:52.601 [2024-07-15 20:22:17.926780] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:52.858 nvme0n1 00:23:52.858 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.858 20:22:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:52.858 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.858 20:22:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.858 [ 00:23:52.858 { 00:23:52.858 "name": "nvme0n1", 00:23:52.858 "aliases": [ 00:23:52.858 "86246063-a7a2-41fe-9942-f3c1aeb2f433" 00:23:52.858 ], 00:23:52.858 "product_name": "NVMe disk", 00:23:52.858 "block_size": 512, 00:23:52.858 "num_blocks": 2097152, 00:23:52.858 "uuid": "86246063-a7a2-41fe-9942-f3c1aeb2f433", 00:23:52.858 "assigned_rate_limits": { 00:23:52.858 "rw_ios_per_sec": 0, 00:23:52.858 "rw_mbytes_per_sec": 0, 00:23:52.858 "r_mbytes_per_sec": 0, 00:23:52.858 "w_mbytes_per_sec": 0 00:23:52.858 }, 00:23:52.858 "claimed": false, 00:23:52.858 "zoned": false, 00:23:52.858 "supported_io_types": { 00:23:52.858 "read": true, 00:23:52.858 "write": true, 00:23:52.858 "unmap": false, 00:23:52.858 "flush": true, 00:23:52.858 "reset": true, 00:23:52.858 "nvme_admin": true, 00:23:52.858 "nvme_io": true, 00:23:52.858 "nvme_io_md": false, 00:23:52.858 "write_zeroes": true, 00:23:52.858 "zcopy": false, 00:23:52.858 "get_zone_info": false, 00:23:52.858 "zone_management": false, 00:23:52.858 "zone_append": false, 00:23:52.858 "compare": true, 00:23:52.858 "compare_and_write": true, 00:23:52.858 "abort": true, 00:23:52.858 "seek_hole": false, 00:23:52.858 "seek_data": false, 00:23:52.858 "copy": true, 00:23:52.858 "nvme_iov_md": false 00:23:52.858 }, 00:23:52.858 "memory_domains": [ 00:23:52.858 { 00:23:52.858 "dma_device_id": "system", 00:23:52.858 "dma_device_type": 1 00:23:52.858 } 00:23:52.858 ], 00:23:52.858 "driver_specific": { 00:23:52.858 "nvme": [ 00:23:52.858 { 00:23:52.858 "trid": { 00:23:52.858 "trtype": "TCP", 00:23:52.858 "adrfam": "IPv4", 00:23:52.858 "traddr": "10.0.0.2", 00:23:52.858 "trsvcid": "4421", 00:23:52.858 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:52.858 }, 00:23:52.858 "ctrlr_data": { 00:23:52.858 "cntlid": 3, 00:23:52.858 "vendor_id": "0x8086", 00:23:52.858 "model_number": "SPDK bdev Controller", 00:23:52.858 "serial_number": "00000000000000000000", 00:23:52.858 "firmware_revision": "24.09", 00:23:52.858 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:23:52.858 "oacs": { 00:23:52.858 "security": 0, 00:23:52.858 "format": 0, 00:23:52.858 "firmware": 0, 00:23:52.858 "ns_manage": 0 00:23:52.858 }, 00:23:52.858 "multi_ctrlr": true, 00:23:52.858 "ana_reporting": false 00:23:52.858 }, 00:23:52.858 "vs": { 00:23:52.858 "nvme_version": "1.3" 00:23:52.858 }, 00:23:52.858 "ns_data": { 00:23:52.858 "id": 1, 00:23:52.858 "can_share": true 00:23:52.858 } 00:23:52.858 } 00:23:52.858 ], 00:23:52.858 "mp_policy": "active_passive" 00:23:52.858 } 00:23:52.858 } 00:23:52.858 ] 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.CwYuE3axgw 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:52.858 rmmod nvme_tcp 00:23:52.858 rmmod nvme_fabrics 00:23:52.858 rmmod nvme_keyring 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 128973 ']' 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 128973 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 128973 ']' 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 128973 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 128973 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 128973' 00:23:52.858 killing process with pid 128973 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 128973 00:23:52.858 [2024-07-15 20:22:18.143737] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:23:52.858 [2024-07-15 20:22:18.143765] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:52.858 20:22:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 128973 00:23:53.115 20:22:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:53.115 20:22:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:53.115 20:22:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:53.115 20:22:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:53.115 20:22:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:53.115 20:22:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.115 20:22:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:53.115 20:22:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.131 20:22:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:55.131 00:23:55.131 real 0m9.917s 00:23:55.131 user 0m3.772s 00:23:55.131 sys 0m4.757s 00:23:55.131 20:22:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:55.131 20:22:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:55.131 ************************************ 00:23:55.131 END TEST nvmf_async_init 00:23:55.131 ************************************ 00:23:55.131 20:22:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:55.131 20:22:20 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:55.131 20:22:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:55.131 20:22:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:55.131 20:22:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:55.131 ************************************ 00:23:55.131 START TEST dma 00:23:55.131 ************************************ 00:23:55.131 20:22:20 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:55.390 * Looking for test storage... 
00:23:55.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:55.390 20:22:20 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.390 20:22:20 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.390 20:22:20 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.390 20:22:20 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.390 20:22:20 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.390 20:22:20 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.390 20:22:20 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.390 20:22:20 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:23:55.390 20:22:20 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:55.390 20:22:20 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:55.390 20:22:20 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:55.390 20:22:20 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:23:55.390 00:23:55.390 real 0m0.116s 00:23:55.390 user 0m0.059s 00:23:55.390 sys 0m0.064s 00:23:55.390 20:22:20 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:55.390 20:22:20 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:23:55.390 ************************************ 00:23:55.390 END TEST dma 00:23:55.390 ************************************ 00:23:55.390 20:22:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:55.390 20:22:20 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:55.390 20:22:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:55.390 20:22:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:55.390 20:22:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:55.390 ************************************ 00:23:55.390 START TEST nvmf_identify 00:23:55.390 ************************************ 00:23:55.390 20:22:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:55.390 * Looking for test storage... 
00:23:55.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:55.649 20:22:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:00.914 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:00.914 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:00.914 Found net devices under 0000:af:00.0: cvl_0_0 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.914 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:00.915 Found net devices under 0000:af:00.1: cvl_0_1 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:00.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:24:00.915 00:24:00.915 --- 10.0.0.2 ping statistics --- 00:24:00.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.915 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:00.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:24:00.915 00:24:00.915 --- 10.0.0.1 ping statistics --- 00:24:00.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.915 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=132835 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 132835 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 132835 ']' 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:00.915 20:22:25 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:00.915 [2024-07-15 20:22:25.897895] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:24:00.915 [2024-07-15 20:22:25.897953] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.915 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.915 [2024-07-15 20:22:25.983730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:00.915 [2024-07-15 20:22:26.076738] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
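The sequence above is the TCP bring-up for a physical (NET_TYPE=phy) run: one e810 port, cvl_0_0, is moved into the cvl_0_0_ns_spdk network namespace and addressed as the target side (10.0.0.2/24); its sibling cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24); TCP port 4420 is opened in iptables and both directions are ping-checked before nvmf_tgt is started inside the namespace via ip netns exec. A minimal sketch of the same topology, assuming root privileges and two spare interfaces named eth_tgt and eth_init (hypothetical stand-ins for cvl_0_0 and cvl_0_1):

# isolate the target-side port in its own namespace so the target and
# initiator stacks cannot short-circuit over loopback
ip netns add nvmf_tgt_ns                                  # log uses cvl_0_0_ns_spdk
ip link set eth_tgt netns nvmf_tgt_ns
ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
ip netns exec nvmf_tgt_ns ip link set eth_tgt up
ip netns exec nvmf_tgt_ns ip link set lo up

# the initiator-side port stays in the root namespace
ip addr add 10.0.0.1/24 dev eth_init
ip link set eth_init up
iptables -I INPUT 1 -i eth_init -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port

# the same sanity checks the log performs
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1

# the target is then launched inside the namespace, as in the log:
# ip netns exec nvmf_tgt_ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF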
00:24:00.915 [2024-07-15 20:22:26.076783] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.915 [2024-07-15 20:22:26.076794] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.915 [2024-07-15 20:22:26.076802] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.915 [2024-07-15 20:22:26.076809] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.915 [2024-07-15 20:22:26.076867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.915 [2024-07-15 20:22:26.076889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.915 [2024-07-15 20:22:26.076985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:00.915 [2024-07-15 20:22:26.076988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.849 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.849 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:24:01.849 20:22:26 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:01.849 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.849 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:01.849 [2024-07-15 20:22:26.855781] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.849 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.849 20:22:26 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:01.849 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:01.849 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:01.849 20:22:26 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:01.849 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.849 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:01.849 Malloc0 00:24:01.849 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.849 20:22:26 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:01.849 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.850 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:01.850 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.850 20:22:26 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:01.850 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.850 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:01.850 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.850 20:22:26 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:01.850 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:24:01.850 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:01.850 [2024-07-15 20:22:26.943661] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.850 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.850 20:22:26 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:01.850 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.850 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:01.850 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.850 20:22:26 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:01.850 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.850 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:01.850 [ 00:24:01.850 { 00:24:01.850 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:01.850 "subtype": "Discovery", 00:24:01.850 "listen_addresses": [ 00:24:01.850 { 00:24:01.850 "trtype": "TCP", 00:24:01.850 "adrfam": "IPv4", 00:24:01.850 "traddr": "10.0.0.2", 00:24:01.850 "trsvcid": "4420" 00:24:01.850 } 00:24:01.850 ], 00:24:01.850 "allow_any_host": true, 00:24:01.850 "hosts": [] 00:24:01.850 }, 00:24:01.850 { 00:24:01.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.850 "subtype": "NVMe", 00:24:01.850 "listen_addresses": [ 00:24:01.850 { 00:24:01.850 "trtype": "TCP", 00:24:01.850 "adrfam": "IPv4", 00:24:01.850 "traddr": "10.0.0.2", 00:24:01.850 "trsvcid": "4420" 00:24:01.850 } 00:24:01.850 ], 00:24:01.850 "allow_any_host": true, 00:24:01.850 "hosts": [], 00:24:01.850 "serial_number": "SPDK00000000000001", 00:24:01.850 "model_number": "SPDK bdev Controller", 00:24:01.850 "max_namespaces": 32, 00:24:01.850 "min_cntlid": 1, 00:24:01.850 "max_cntlid": 65519, 00:24:01.850 "namespaces": [ 00:24:01.850 { 00:24:01.850 "nsid": 1, 00:24:01.850 "bdev_name": "Malloc0", 00:24:01.850 "name": "Malloc0", 00:24:01.850 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:01.850 "eui64": "ABCDEF0123456789", 00:24:01.850 "uuid": "defc1273-5d50-4854-b9f8-102c344dbd0e" 00:24:01.850 } 00:24:01.850 ] 00:24:01.850 } 00:24:01.850 ] 00:24:01.850 20:22:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.850 20:22:26 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:01.850 [2024-07-15 20:22:26.994864] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
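At this point identify.sh has finished building the target over JSON-RPC: a TCP transport with 8192-byte in-capsule data (nvmf_create_transport -t tcp -o -u 8192), a 64 MiB / 512-byte-block Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 carrying Malloc0 as namespace 1 with fixed NGUID/EUI64, and listeners on 10.0.0.2:4420 for both that subsystem and the discovery service; nvmf_get_subsystems then dumps the resulting configuration (the JSON block above) before spdk_nvme_identify is pointed at the discovery NQN. The same sequence restated as direct scripts/rpc.py calls is below as a sketch only: the autotest goes through its rpc_cmd wrapper, the rpc.py path assumes an SPDK checkout, and all flags are copied from the rpc_cmd calls in the log.

RPC=./scripts/rpc.py                     # assumed location inside an SPDK tree

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # 'discovery' shorthand, as in the log
$RPC nvmf_get_subsystems                 # should report cnode1 with nsid 1 -> Malloc0, matching the JSON above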
00:24:01.850 [2024-07-15 20:22:26.994901] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133050 ] 00:24:01.850 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.850 [2024-07-15 20:22:27.031779] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:01.850 [2024-07-15 20:22:27.031840] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:01.850 [2024-07-15 20:22:27.031847] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:01.850 [2024-07-15 20:22:27.031863] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:01.850 [2024-07-15 20:22:27.031871] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:01.850 [2024-07-15 20:22:27.032249] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:01.850 [2024-07-15 20:22:27.032299] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb72ec0 0 00:24:01.850 [2024-07-15 20:22:27.046269] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:01.850 [2024-07-15 20:22:27.046286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:01.850 [2024-07-15 20:22:27.046292] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:01.850 [2024-07-15 20:22:27.046297] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:01.850 [2024-07-15 20:22:27.046344] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.850 [2024-07-15 20:22:27.046351] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.850 [2024-07-15 20:22:27.046357] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb72ec0) 00:24:01.850 [2024-07-15 20:22:27.046371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:01.850 [2024-07-15 20:22:27.046391] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf5e40, cid 0, qid 0 00:24:01.850 [2024-07-15 20:22:27.053267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.850 [2024-07-15 20:22:27.053279] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.850 [2024-07-15 20:22:27.053284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.850 [2024-07-15 20:22:27.053290] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf5e40) on tqpair=0xb72ec0 00:24:01.850 [2024-07-15 20:22:27.053304] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:01.850 [2024-07-15 20:22:27.053313] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:01.850 [2024-07-15 20:22:27.053320] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:01.850 [2024-07-15 20:22:27.053336] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.850 [2024-07-15 20:22:27.053342] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.850 [2024-07-15 20:22:27.053346] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb72ec0) 00:24:01.850 [2024-07-15 20:22:27.053356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.850 [2024-07-15 20:22:27.053374] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf5e40, cid 0, qid 0 00:24:01.850 [2024-07-15 20:22:27.053578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.850 [2024-07-15 20:22:27.053587] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.850 [2024-07-15 20:22:27.053591] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.850 [2024-07-15 20:22:27.053596] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf5e40) on tqpair=0xb72ec0 00:24:01.850 [2024-07-15 20:22:27.053602] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:01.850 [2024-07-15 20:22:27.053612] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:01.850 [2024-07-15 20:22:27.053620] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.850 [2024-07-15 20:22:27.053625] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.850 [2024-07-15 20:22:27.053630] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb72ec0) 00:24:01.850 [2024-07-15 20:22:27.053639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.850 [2024-07-15 20:22:27.053656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf5e40, cid 0, qid 0 00:24:01.850 [2024-07-15 20:22:27.053771] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.850 [2024-07-15 20:22:27.053779] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.850 [2024-07-15 20:22:27.053784] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.850 [2024-07-15 20:22:27.053788] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf5e40) on tqpair=0xb72ec0 00:24:01.850 [2024-07-15 20:22:27.053794] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:01.850 [2024-07-15 20:22:27.053805] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:01.850 [2024-07-15 20:22:27.053813] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.850 [2024-07-15 20:22:27.053818] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.850 [2024-07-15 20:22:27.053823] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb72ec0) 00:24:01.850 [2024-07-15 20:22:27.053831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.850 [2024-07-15 20:22:27.053844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf5e40, cid 0, qid 0 00:24:01.850 [2024-07-15 20:22:27.053940] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.850 
[2024-07-15 20:22:27.053949] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.850 [2024-07-15 20:22:27.053953] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.850 [2024-07-15 20:22:27.053958] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf5e40) on tqpair=0xb72ec0 00:24:01.850 [2024-07-15 20:22:27.053964] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:01.850 [2024-07-15 20:22:27.053976] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.850 [2024-07-15 20:22:27.053981] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.850 [2024-07-15 20:22:27.053986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb72ec0) 00:24:01.850 [2024-07-15 20:22:27.053995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.850 [2024-07-15 20:22:27.054007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf5e40, cid 0, qid 0 00:24:01.850 [2024-07-15 20:22:27.054095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.850 [2024-07-15 20:22:27.054104] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.850 [2024-07-15 20:22:27.054108] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.850 [2024-07-15 20:22:27.054113] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf5e40) on tqpair=0xb72ec0 00:24:01.850 [2024-07-15 20:22:27.054118] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:01.850 [2024-07-15 20:22:27.054124] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:01.850 [2024-07-15 20:22:27.054135] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:01.850 [2024-07-15 20:22:27.054241] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:01.851 [2024-07-15 20:22:27.054248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:01.851 [2024-07-15 20:22:27.054265] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.054270] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.054278] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb72ec0) 00:24:01.851 [2024-07-15 20:22:27.054286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.851 [2024-07-15 20:22:27.054300] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf5e40, cid 0, qid 0 00:24:01.851 [2024-07-15 20:22:27.054390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.851 [2024-07-15 20:22:27.054398] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.851 [2024-07-15 20:22:27.054403] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:24:01.851 [2024-07-15 20:22:27.054408] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf5e40) on tqpair=0xb72ec0 00:24:01.851 [2024-07-15 20:22:27.054413] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:01.851 [2024-07-15 20:22:27.054425] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.054430] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.054435] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb72ec0) 00:24:01.851 [2024-07-15 20:22:27.054443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.851 [2024-07-15 20:22:27.054456] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf5e40, cid 0, qid 0 00:24:01.851 [2024-07-15 20:22:27.054549] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.851 [2024-07-15 20:22:27.054557] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.851 [2024-07-15 20:22:27.054561] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.054566] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf5e40) on tqpair=0xb72ec0 00:24:01.851 [2024-07-15 20:22:27.054571] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:01.851 [2024-07-15 20:22:27.054578] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:01.851 [2024-07-15 20:22:27.054587] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:01.851 [2024-07-15 20:22:27.054602] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:01.851 [2024-07-15 20:22:27.054614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.054619] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb72ec0) 00:24:01.851 [2024-07-15 20:22:27.054628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.851 [2024-07-15 20:22:27.054641] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf5e40, cid 0, qid 0 00:24:01.851 [2024-07-15 20:22:27.054768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:01.851 [2024-07-15 20:22:27.054777] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:01.851 [2024-07-15 20:22:27.054782] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.054787] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb72ec0): datao=0, datal=4096, cccid=0 00:24:01.851 [2024-07-15 20:22:27.054792] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbf5e40) on tqpair(0xb72ec0): expected_datao=0, payload_size=4096 00:24:01.851 [2024-07-15 20:22:27.054798] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
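The DEBUG lines from here on are spdk_nvme_identify stepping the fabrics admin queue through controller initialization against the discovery controller at 10.0.0.2:4420: FABRIC CONNECT, property reads of VS and CAP, the CC/CSTS handshake (disable, then CC.EN = 1 and the wait for CSTS.RDY = 1), IDENTIFY controller (cdw10:00000001, i.e. CNS 1), AER configuration, keep-alive setup, and finally GET LOG PAGE reads of the discovery log page (log ID 0x70) that produce the report printed further below. The run can be repeated against the same listener with the command the script used earlier in the log; this is just that invocation restated, with the caveat that dropping -L all should leave only the identify/discovery report without the per-PDU debug trace:

# run from the initiator side (root namespace); path assumes the build tree used by this job
./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all        # -L all enables the nvme/nvme_tcp debug log flags seen in this trace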
00:24:01.851 [2024-07-15 20:22:27.054807] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.054812] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.099264] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.851 [2024-07-15 20:22:27.099279] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.851 [2024-07-15 20:22:27.099284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.099290] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf5e40) on tqpair=0xb72ec0 00:24:01.851 [2024-07-15 20:22:27.099299] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:01.851 [2024-07-15 20:22:27.099309] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:01.851 [2024-07-15 20:22:27.099315] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:01.851 [2024-07-15 20:22:27.099322] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:01.851 [2024-07-15 20:22:27.099328] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:01.851 [2024-07-15 20:22:27.099334] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:01.851 [2024-07-15 20:22:27.099346] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:01.851 [2024-07-15 20:22:27.099355] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.099360] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.099365] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb72ec0) 00:24:01.851 [2024-07-15 20:22:27.099376] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:01.851 [2024-07-15 20:22:27.099393] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf5e40, cid 0, qid 0 00:24:01.851 [2024-07-15 20:22:27.099598] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.851 [2024-07-15 20:22:27.099606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.851 [2024-07-15 20:22:27.099611] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.099616] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf5e40) on tqpair=0xb72ec0 00:24:01.851 [2024-07-15 20:22:27.099625] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.099629] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.099634] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb72ec0) 00:24:01.851 [2024-07-15 20:22:27.099642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.851 [2024-07-15 20:22:27.099650] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.099654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.099659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb72ec0) 00:24:01.851 [2024-07-15 20:22:27.099666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.851 [2024-07-15 20:22:27.099674] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.099678] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.099683] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb72ec0) 00:24:01.851 [2024-07-15 20:22:27.099690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.851 [2024-07-15 20:22:27.099697] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.099702] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.099710] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:01.851 [2024-07-15 20:22:27.099717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.851 [2024-07-15 20:22:27.099723] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:01.851 [2024-07-15 20:22:27.099737] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:01.851 [2024-07-15 20:22:27.099745] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.099750] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb72ec0) 00:24:01.851 [2024-07-15 20:22:27.099759] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.851 [2024-07-15 20:22:27.099775] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf5e40, cid 0, qid 0 00:24:01.851 [2024-07-15 20:22:27.099782] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf5fc0, cid 1, qid 0 00:24:01.851 [2024-07-15 20:22:27.099788] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf6140, cid 2, qid 0 00:24:01.851 [2024-07-15 20:22:27.099793] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:01.851 [2024-07-15 20:22:27.099799] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf6440, cid 4, qid 0 00:24:01.851 [2024-07-15 20:22:27.099946] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.851 [2024-07-15 20:22:27.099954] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.851 [2024-07-15 20:22:27.099959] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.099964] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf6440) on tqpair=0xb72ec0 00:24:01.851 [2024-07-15 20:22:27.099971] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:01.851 [2024-07-15 20:22:27.099977] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:01.851 [2024-07-15 20:22:27.099991] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.099996] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb72ec0) 00:24:01.851 [2024-07-15 20:22:27.100004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.851 [2024-07-15 20:22:27.100018] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf6440, cid 4, qid 0 00:24:01.851 [2024-07-15 20:22:27.100123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:01.851 [2024-07-15 20:22:27.100131] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:01.851 [2024-07-15 20:22:27.100136] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.100141] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb72ec0): datao=0, datal=4096, cccid=4 00:24:01.851 [2024-07-15 20:22:27.100146] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbf6440) on tqpair(0xb72ec0): expected_datao=0, payload_size=4096 00:24:01.851 [2024-07-15 20:22:27.100152] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.100183] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.100189] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.100234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.851 [2024-07-15 20:22:27.100243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.851 [2024-07-15 20:22:27.100247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.100252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf6440) on tqpair=0xb72ec0 00:24:01.851 [2024-07-15 20:22:27.100286] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:01.851 [2024-07-15 20:22:27.100314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.851 [2024-07-15 20:22:27.100320] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb72ec0) 00:24:01.852 [2024-07-15 20:22:27.100328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.852 [2024-07-15 20:22:27.100336] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.852 [2024-07-15 20:22:27.100341] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.852 [2024-07-15 20:22:27.100346] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb72ec0) 00:24:01.852 [2024-07-15 20:22:27.100354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:01.852 [2024-07-15 20:22:27.100372] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xbf6440, cid 4, qid 0 00:24:01.852 [2024-07-15 20:22:27.100379] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf65c0, cid 5, qid 0 00:24:01.852 [2024-07-15 20:22:27.100528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:01.852 [2024-07-15 20:22:27.100537] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:01.852 [2024-07-15 20:22:27.100541] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:01.852 [2024-07-15 20:22:27.100546] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb72ec0): datao=0, datal=1024, cccid=4 00:24:01.852 [2024-07-15 20:22:27.100552] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbf6440) on tqpair(0xb72ec0): expected_datao=0, payload_size=1024 00:24:01.852 [2024-07-15 20:22:27.100557] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.852 [2024-07-15 20:22:27.100566] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:01.852 [2024-07-15 20:22:27.100571] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:01.852 [2024-07-15 20:22:27.100578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.852 [2024-07-15 20:22:27.100585] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.852 [2024-07-15 20:22:27.100589] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.852 [2024-07-15 20:22:27.100594] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf65c0) on tqpair=0xb72ec0 00:24:01.852 [2024-07-15 20:22:27.141490] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.852 [2024-07-15 20:22:27.141504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.852 [2024-07-15 20:22:27.141508] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.852 [2024-07-15 20:22:27.141514] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf6440) on tqpair=0xb72ec0 00:24:01.852 [2024-07-15 20:22:27.141533] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.852 [2024-07-15 20:22:27.141539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb72ec0) 00:24:01.852 [2024-07-15 20:22:27.141549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.852 [2024-07-15 20:22:27.141570] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf6440, cid 4, qid 0 00:24:01.852 [2024-07-15 20:22:27.141678] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:01.852 [2024-07-15 20:22:27.141687] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:01.852 [2024-07-15 20:22:27.141691] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:01.852 [2024-07-15 20:22:27.141696] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb72ec0): datao=0, datal=3072, cccid=4 00:24:01.852 [2024-07-15 20:22:27.141702] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbf6440) on tqpair(0xb72ec0): expected_datao=0, payload_size=3072 00:24:01.852 [2024-07-15 20:22:27.141713] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.852 [2024-07-15 20:22:27.141750] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:01.852 [2024-07-15 20:22:27.141755] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:01.852 [2024-07-15 20:22:27.182494] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:01.852 [2024-07-15 20:22:27.182506] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:01.852 [2024-07-15 20:22:27.182511] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:01.852 [2024-07-15 20:22:27.182515] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf6440) on tqpair=0xb72ec0 00:24:01.852 [2024-07-15 20:22:27.182528] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:01.852 [2024-07-15 20:22:27.182533] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb72ec0) 00:24:01.852 [2024-07-15 20:22:27.182543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.852 [2024-07-15 20:22:27.182563] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf6440, cid 4, qid 0 00:24:01.852 [2024-07-15 20:22:27.182678] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:01.852 [2024-07-15 20:22:27.182686] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:01.852 [2024-07-15 20:22:27.182691] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:01.852 [2024-07-15 20:22:27.182695] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb72ec0): datao=0, datal=8, cccid=4 00:24:01.852 [2024-07-15 20:22:27.182701] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbf6440) on tqpair(0xb72ec0): expected_datao=0, payload_size=8 00:24:01.852 [2024-07-15 20:22:27.182706] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:01.852 [2024-07-15 20:22:27.182715] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:01.852 [2024-07-15 20:22:27.182719] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.114 [2024-07-15 20:22:27.225273] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.114 [2024-07-15 20:22:27.225294] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.114 [2024-07-15 20:22:27.225300] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.114 [2024-07-15 20:22:27.225305] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf6440) on tqpair=0xb72ec0 00:24:02.114 ===================================================== 00:24:02.114 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:02.114 ===================================================== 00:24:02.114 Controller Capabilities/Features 00:24:02.114 ================================ 00:24:02.114 Vendor ID: 0000 00:24:02.114 Subsystem Vendor ID: 0000 00:24:02.114 Serial Number: .................... 00:24:02.114 Model Number: ........................................ 
00:24:02.114 Firmware Version: 24.09 00:24:02.114 Recommended Arb Burst: 0 00:24:02.114 IEEE OUI Identifier: 00 00 00 00:24:02.114 Multi-path I/O 00:24:02.114 May have multiple subsystem ports: No 00:24:02.114 May have multiple controllers: No 00:24:02.114 Associated with SR-IOV VF: No 00:24:02.114 Max Data Transfer Size: 131072 00:24:02.114 Max Number of Namespaces: 0 00:24:02.114 Max Number of I/O Queues: 1024 00:24:02.114 NVMe Specification Version (VS): 1.3 00:24:02.114 NVMe Specification Version (Identify): 1.3 00:24:02.114 Maximum Queue Entries: 128 00:24:02.114 Contiguous Queues Required: Yes 00:24:02.114 Arbitration Mechanisms Supported 00:24:02.114 Weighted Round Robin: Not Supported 00:24:02.114 Vendor Specific: Not Supported 00:24:02.114 Reset Timeout: 15000 ms 00:24:02.114 Doorbell Stride: 4 bytes 00:24:02.114 NVM Subsystem Reset: Not Supported 00:24:02.114 Command Sets Supported 00:24:02.114 NVM Command Set: Supported 00:24:02.114 Boot Partition: Not Supported 00:24:02.114 Memory Page Size Minimum: 4096 bytes 00:24:02.114 Memory Page Size Maximum: 4096 bytes 00:24:02.114 Persistent Memory Region: Not Supported 00:24:02.114 Optional Asynchronous Events Supported 00:24:02.114 Namespace Attribute Notices: Not Supported 00:24:02.114 Firmware Activation Notices: Not Supported 00:24:02.114 ANA Change Notices: Not Supported 00:24:02.114 PLE Aggregate Log Change Notices: Not Supported 00:24:02.114 LBA Status Info Alert Notices: Not Supported 00:24:02.114 EGE Aggregate Log Change Notices: Not Supported 00:24:02.114 Normal NVM Subsystem Shutdown event: Not Supported 00:24:02.114 Zone Descriptor Change Notices: Not Supported 00:24:02.114 Discovery Log Change Notices: Supported 00:24:02.114 Controller Attributes 00:24:02.114 128-bit Host Identifier: Not Supported 00:24:02.114 Non-Operational Permissive Mode: Not Supported 00:24:02.114 NVM Sets: Not Supported 00:24:02.115 Read Recovery Levels: Not Supported 00:24:02.115 Endurance Groups: Not Supported 00:24:02.115 Predictable Latency Mode: Not Supported 00:24:02.115 Traffic Based Keep ALive: Not Supported 00:24:02.115 Namespace Granularity: Not Supported 00:24:02.115 SQ Associations: Not Supported 00:24:02.115 UUID List: Not Supported 00:24:02.115 Multi-Domain Subsystem: Not Supported 00:24:02.115 Fixed Capacity Management: Not Supported 00:24:02.115 Variable Capacity Management: Not Supported 00:24:02.115 Delete Endurance Group: Not Supported 00:24:02.115 Delete NVM Set: Not Supported 00:24:02.115 Extended LBA Formats Supported: Not Supported 00:24:02.115 Flexible Data Placement Supported: Not Supported 00:24:02.115 00:24:02.115 Controller Memory Buffer Support 00:24:02.115 ================================ 00:24:02.115 Supported: No 00:24:02.115 00:24:02.115 Persistent Memory Region Support 00:24:02.115 ================================ 00:24:02.115 Supported: No 00:24:02.115 00:24:02.115 Admin Command Set Attributes 00:24:02.115 ============================ 00:24:02.115 Security Send/Receive: Not Supported 00:24:02.115 Format NVM: Not Supported 00:24:02.115 Firmware Activate/Download: Not Supported 00:24:02.115 Namespace Management: Not Supported 00:24:02.115 Device Self-Test: Not Supported 00:24:02.115 Directives: Not Supported 00:24:02.115 NVMe-MI: Not Supported 00:24:02.115 Virtualization Management: Not Supported 00:24:02.115 Doorbell Buffer Config: Not Supported 00:24:02.115 Get LBA Status Capability: Not Supported 00:24:02.115 Command & Feature Lockdown Capability: Not Supported 00:24:02.115 Abort Command Limit: 1 00:24:02.115 Async 
Event Request Limit: 4 00:24:02.115 Number of Firmware Slots: N/A 00:24:02.115 Firmware Slot 1 Read-Only: N/A 00:24:02.115 Firmware Activation Without Reset: N/A 00:24:02.115 Multiple Update Detection Support: N/A 00:24:02.115 Firmware Update Granularity: No Information Provided 00:24:02.115 Per-Namespace SMART Log: No 00:24:02.115 Asymmetric Namespace Access Log Page: Not Supported 00:24:02.115 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:02.115 Command Effects Log Page: Not Supported 00:24:02.115 Get Log Page Extended Data: Supported 00:24:02.115 Telemetry Log Pages: Not Supported 00:24:02.115 Persistent Event Log Pages: Not Supported 00:24:02.115 Supported Log Pages Log Page: May Support 00:24:02.115 Commands Supported & Effects Log Page: Not Supported 00:24:02.115 Feature Identifiers & Effects Log Page:May Support 00:24:02.115 NVMe-MI Commands & Effects Log Page: May Support 00:24:02.115 Data Area 4 for Telemetry Log: Not Supported 00:24:02.115 Error Log Page Entries Supported: 128 00:24:02.115 Keep Alive: Not Supported 00:24:02.115 00:24:02.115 NVM Command Set Attributes 00:24:02.115 ========================== 00:24:02.115 Submission Queue Entry Size 00:24:02.115 Max: 1 00:24:02.115 Min: 1 00:24:02.115 Completion Queue Entry Size 00:24:02.115 Max: 1 00:24:02.115 Min: 1 00:24:02.115 Number of Namespaces: 0 00:24:02.115 Compare Command: Not Supported 00:24:02.115 Write Uncorrectable Command: Not Supported 00:24:02.115 Dataset Management Command: Not Supported 00:24:02.115 Write Zeroes Command: Not Supported 00:24:02.115 Set Features Save Field: Not Supported 00:24:02.115 Reservations: Not Supported 00:24:02.115 Timestamp: Not Supported 00:24:02.115 Copy: Not Supported 00:24:02.115 Volatile Write Cache: Not Present 00:24:02.115 Atomic Write Unit (Normal): 1 00:24:02.115 Atomic Write Unit (PFail): 1 00:24:02.115 Atomic Compare & Write Unit: 1 00:24:02.115 Fused Compare & Write: Supported 00:24:02.115 Scatter-Gather List 00:24:02.115 SGL Command Set: Supported 00:24:02.115 SGL Keyed: Supported 00:24:02.115 SGL Bit Bucket Descriptor: Not Supported 00:24:02.115 SGL Metadata Pointer: Not Supported 00:24:02.115 Oversized SGL: Not Supported 00:24:02.115 SGL Metadata Address: Not Supported 00:24:02.115 SGL Offset: Supported 00:24:02.115 Transport SGL Data Block: Not Supported 00:24:02.115 Replay Protected Memory Block: Not Supported 00:24:02.115 00:24:02.115 Firmware Slot Information 00:24:02.115 ========================= 00:24:02.115 Active slot: 0 00:24:02.115 00:24:02.115 00:24:02.115 Error Log 00:24:02.115 ========= 00:24:02.115 00:24:02.115 Active Namespaces 00:24:02.115 ================= 00:24:02.115 Discovery Log Page 00:24:02.115 ================== 00:24:02.115 Generation Counter: 2 00:24:02.115 Number of Records: 2 00:24:02.115 Record Format: 0 00:24:02.115 00:24:02.115 Discovery Log Entry 0 00:24:02.115 ---------------------- 00:24:02.115 Transport Type: 3 (TCP) 00:24:02.115 Address Family: 1 (IPv4) 00:24:02.115 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:02.115 Entry Flags: 00:24:02.115 Duplicate Returned Information: 1 00:24:02.115 Explicit Persistent Connection Support for Discovery: 1 00:24:02.115 Transport Requirements: 00:24:02.115 Secure Channel: Not Required 00:24:02.115 Port ID: 0 (0x0000) 00:24:02.115 Controller ID: 65535 (0xffff) 00:24:02.115 Admin Max SQ Size: 128 00:24:02.115 Transport Service Identifier: 4420 00:24:02.115 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:02.115 Transport Address: 10.0.0.2 00:24:02.115 
Discovery Log Entry 1 00:24:02.115 ---------------------- 00:24:02.115 Transport Type: 3 (TCP) 00:24:02.115 Address Family: 1 (IPv4) 00:24:02.115 Subsystem Type: 2 (NVM Subsystem) 00:24:02.115 Entry Flags: 00:24:02.115 Duplicate Returned Information: 0 00:24:02.115 Explicit Persistent Connection Support for Discovery: 0 00:24:02.115 Transport Requirements: 00:24:02.115 Secure Channel: Not Required 00:24:02.115 Port ID: 0 (0x0000) 00:24:02.115 Controller ID: 65535 (0xffff) 00:24:02.115 Admin Max SQ Size: 128 00:24:02.115 Transport Service Identifier: 4420 00:24:02.115 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:02.115 Transport Address: 10.0.0.2 [2024-07-15 20:22:27.225410] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:02.115 [2024-07-15 20:22:27.225424] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf5e40) on tqpair=0xb72ec0 00:24:02.115 [2024-07-15 20:22:27.225432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.115 [2024-07-15 20:22:27.225440] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf5fc0) on tqpair=0xb72ec0 00:24:02.115 [2024-07-15 20:22:27.225446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.115 [2024-07-15 20:22:27.225452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf6140) on tqpair=0xb72ec0 00:24:02.115 [2024-07-15 20:22:27.225458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.115 [2024-07-15 20:22:27.225464] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.115 [2024-07-15 20:22:27.225470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.115 [2024-07-15 20:22:27.225483] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.115 [2024-07-15 20:22:27.225489] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.115 [2024-07-15 20:22:27.225493] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.115 [2024-07-15 20:22:27.225505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.115 [2024-07-15 20:22:27.225526] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.115 [2024-07-15 20:22:27.225613] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.116 [2024-07-15 20:22:27.225622] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.116 [2024-07-15 20:22:27.225626] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.225632] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.116 [2024-07-15 20:22:27.225640] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.225645] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.225650] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.116 [2024-07-15 20:22:27.225659] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.116 [2024-07-15 20:22:27.225678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.116 [2024-07-15 20:22:27.225793] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.116 [2024-07-15 20:22:27.225802] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.116 [2024-07-15 20:22:27.225806] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.225812] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.116 [2024-07-15 20:22:27.225818] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:02.116 [2024-07-15 20:22:27.225824] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:02.116 [2024-07-15 20:22:27.225836] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.225842] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.225846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.116 [2024-07-15 20:22:27.225855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.116 [2024-07-15 20:22:27.225868] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.116 [2024-07-15 20:22:27.225954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.116 [2024-07-15 20:22:27.225962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.116 [2024-07-15 20:22:27.225966] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.225971] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.116 [2024-07-15 20:22:27.225984] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.225989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.225994] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.116 [2024-07-15 20:22:27.226002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.116 [2024-07-15 20:22:27.226015] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.116 [2024-07-15 20:22:27.226100] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.116 [2024-07-15 20:22:27.226108] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.116 [2024-07-15 20:22:27.226113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.226117] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.116 [2024-07-15 20:22:27.226129] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.226134] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.226141] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.116 [2024-07-15 20:22:27.226150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.116 [2024-07-15 20:22:27.226163] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.116 [2024-07-15 20:22:27.226246] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.116 [2024-07-15 20:22:27.226263] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.116 [2024-07-15 20:22:27.226268] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.226273] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.116 [2024-07-15 20:22:27.226285] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.226290] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.226295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.116 [2024-07-15 20:22:27.226303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.116 [2024-07-15 20:22:27.226317] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.116 [2024-07-15 20:22:27.226408] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.116 [2024-07-15 20:22:27.226417] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.116 [2024-07-15 20:22:27.226421] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.226426] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.116 [2024-07-15 20:22:27.226437] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.226443] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.226447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.116 [2024-07-15 20:22:27.226456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.116 [2024-07-15 20:22:27.226469] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.116 [2024-07-15 20:22:27.226586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.116 [2024-07-15 20:22:27.226594] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.116 [2024-07-15 20:22:27.226599] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.226604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.116 [2024-07-15 20:22:27.226618] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.226624] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.226629] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.116 [2024-07-15 20:22:27.226638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.116 [2024-07-15 20:22:27.226652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.116 [2024-07-15 20:22:27.226756] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.116 [2024-07-15 20:22:27.226764] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.116 [2024-07-15 20:22:27.226769] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.226774] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.116 [2024-07-15 20:22:27.226786] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.226791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.226796] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.116 [2024-07-15 20:22:27.226806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.116 [2024-07-15 20:22:27.226820] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.116 [2024-07-15 20:22:27.226906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.116 [2024-07-15 20:22:27.226914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.116 [2024-07-15 20:22:27.226918] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.226923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.116 [2024-07-15 20:22:27.226935] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.226940] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.226945] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.116 [2024-07-15 20:22:27.226953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.116 [2024-07-15 20:22:27.226966] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.116 [2024-07-15 20:22:27.227051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.116 [2024-07-15 20:22:27.227059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.116 [2024-07-15 20:22:27.227063] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.227068] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.116 [2024-07-15 20:22:27.227080] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.227085] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.227090] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.116 [2024-07-15 20:22:27.227098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.116 [2024-07-15 20:22:27.227111] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.116 [2024-07-15 20:22:27.227196] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.116 [2024-07-15 20:22:27.227205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.116 [2024-07-15 20:22:27.227209] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.227214] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.116 [2024-07-15 20:22:27.227225] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.227231] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.227235] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.116 [2024-07-15 20:22:27.227244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.116 [2024-07-15 20:22:27.227264] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.116 [2024-07-15 20:22:27.227358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.116 [2024-07-15 20:22:27.227366] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.116 [2024-07-15 20:22:27.227371] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.227375] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.116 [2024-07-15 20:22:27.227387] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.227392] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.227397] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.116 [2024-07-15 20:22:27.227405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.116 [2024-07-15 20:22:27.227421] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.116 [2024-07-15 20:22:27.227508] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.116 [2024-07-15 20:22:27.227516] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.116 [2024-07-15 20:22:27.227520] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.227525] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.116 [2024-07-15 20:22:27.227537] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.116 [2024-07-15 20:22:27.227542] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.227547] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.117 [2024-07-15 20:22:27.227555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.117 [2024-07-15 20:22:27.227568] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.117 [2024-07-15 20:22:27.227669] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.117 [2024-07-15 20:22:27.227676] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.117 [2024-07-15 20:22:27.227681] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.227686] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.117 [2024-07-15 20:22:27.227698] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.227703] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.227707] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.117 [2024-07-15 20:22:27.227716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.117 [2024-07-15 20:22:27.227729] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.117 [2024-07-15 20:22:27.227818] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.117 [2024-07-15 20:22:27.227827] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.117 [2024-07-15 20:22:27.227831] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.227836] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.117 [2024-07-15 20:22:27.227848] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.227853] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.227857] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.117 [2024-07-15 20:22:27.227866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.117 [2024-07-15 20:22:27.227879] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.117 [2024-07-15 20:22:27.227968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.117 [2024-07-15 20:22:27.227977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.117 [2024-07-15 20:22:27.227981] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.227986] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.117 [2024-07-15 20:22:27.227998] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.228003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.228008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.117 [2024-07-15 20:22:27.228016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.117 [2024-07-15 20:22:27.228035] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.117 [2024-07-15 20:22:27.228130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.117 [2024-07-15 20:22:27.228139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.117 [2024-07-15 20:22:27.228143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.228148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.117 
[2024-07-15 20:22:27.228160] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.228165] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.228170] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.117 [2024-07-15 20:22:27.228179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.117 [2024-07-15 20:22:27.228192] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.117 [2024-07-15 20:22:27.228289] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.117 [2024-07-15 20:22:27.228298] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.117 [2024-07-15 20:22:27.228303] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.228308] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.117 [2024-07-15 20:22:27.228320] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.228325] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.228329] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.117 [2024-07-15 20:22:27.228337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.117 [2024-07-15 20:22:27.228351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.117 [2024-07-15 20:22:27.228437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.117 [2024-07-15 20:22:27.228445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.117 [2024-07-15 20:22:27.228449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.228454] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.117 [2024-07-15 20:22:27.228466] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.228471] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.228476] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.117 [2024-07-15 20:22:27.228484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.117 [2024-07-15 20:22:27.228497] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.117 [2024-07-15 20:22:27.228584] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.117 [2024-07-15 20:22:27.228592] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.117 [2024-07-15 20:22:27.228597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.228602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.117 [2024-07-15 20:22:27.228613] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.228619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.117 [2024-07-15 
20:22:27.228625] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.117 [2024-07-15 20:22:27.228634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.117 [2024-07-15 20:22:27.228647] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.117 [2024-07-15 20:22:27.228757] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.117 [2024-07-15 20:22:27.228765] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.117 [2024-07-15 20:22:27.228771] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.228775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.117 [2024-07-15 20:22:27.228788] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.228795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.228800] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.117 [2024-07-15 20:22:27.228808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.117 [2024-07-15 20:22:27.228821] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.117 [2024-07-15 20:22:27.228921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.117 [2024-07-15 20:22:27.228930] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.117 [2024-07-15 20:22:27.228935] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.228940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.117 [2024-07-15 20:22:27.228952] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.228957] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.228962] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.117 [2024-07-15 20:22:27.228970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.117 [2024-07-15 20:22:27.228985] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.117 [2024-07-15 20:22:27.229072] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.117 [2024-07-15 20:22:27.229081] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.117 [2024-07-15 20:22:27.229086] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.229091] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.117 [2024-07-15 20:22:27.229103] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.229109] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.229114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.117 [2024-07-15 20:22:27.229123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.117 [2024-07-15 20:22:27.229137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.117 [2024-07-15 20:22:27.229243] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.117 [2024-07-15 20:22:27.229251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.117 [2024-07-15 20:22:27.233268] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.233275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.117 [2024-07-15 20:22:27.233289] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.233294] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.233299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb72ec0) 00:24:02.117 [2024-07-15 20:22:27.233308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.117 [2024-07-15 20:22:27.233323] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbf62c0, cid 3, qid 0 00:24:02.117 [2024-07-15 20:22:27.233419] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.117 [2024-07-15 20:22:27.233428] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.117 [2024-07-15 20:22:27.233436] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.117 [2024-07-15 20:22:27.233441] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbf62c0) on tqpair=0xb72ec0 00:24:02.117 [2024-07-15 20:22:27.233450] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:24:02.117 00:24:02.117 20:22:27 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:02.117 [2024-07-15 20:22:27.278570] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:24:02.118 [2024-07-15 20:22:27.278617] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133067 ] 00:24:02.118 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.118 [2024-07-15 20:22:27.315496] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:02.118 [2024-07-15 20:22:27.315547] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:02.118 [2024-07-15 20:22:27.315554] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:02.118 [2024-07-15 20:22:27.315566] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:02.118 [2024-07-15 20:22:27.315574] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:02.118 [2024-07-15 20:22:27.315774] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:02.118 [2024-07-15 20:22:27.315806] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xccbec0 0 00:24:02.118 [2024-07-15 20:22:27.327267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:02.118 [2024-07-15 20:22:27.327282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:02.118 [2024-07-15 20:22:27.327287] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:02.118 [2024-07-15 20:22:27.327292] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:02.118 [2024-07-15 20:22:27.327332] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.327339] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.327344] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xccbec0) 00:24:02.118 [2024-07-15 20:22:27.327358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:02.118 [2024-07-15 20:22:27.327377] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4ee40, cid 0, qid 0 00:24:02.118 [2024-07-15 20:22:27.337269] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.118 [2024-07-15 20:22:27.337289] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.118 [2024-07-15 20:22:27.337294] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.337299] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4ee40) on tqpair=0xccbec0 00:24:02.118 [2024-07-15 20:22:27.337310] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:02.118 [2024-07-15 20:22:27.337318] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:02.118 [2024-07-15 20:22:27.337325] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:02.118 [2024-07-15 20:22:27.337339] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.337348] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.118 
[2024-07-15 20:22:27.337353] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xccbec0) 00:24:02.118 [2024-07-15 20:22:27.337363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.118 [2024-07-15 20:22:27.337380] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4ee40, cid 0, qid 0 00:24:02.118 [2024-07-15 20:22:27.337473] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.118 [2024-07-15 20:22:27.337482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.118 [2024-07-15 20:22:27.337486] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.337491] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4ee40) on tqpair=0xccbec0 00:24:02.118 [2024-07-15 20:22:27.337497] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:02.118 [2024-07-15 20:22:27.337507] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:02.118 [2024-07-15 20:22:27.337516] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.337521] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.337526] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xccbec0) 00:24:02.118 [2024-07-15 20:22:27.337534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.118 [2024-07-15 20:22:27.337549] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4ee40, cid 0, qid 0 00:24:02.118 [2024-07-15 20:22:27.337620] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.118 [2024-07-15 20:22:27.337629] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.118 [2024-07-15 20:22:27.337633] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.337638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4ee40) on tqpair=0xccbec0 00:24:02.118 [2024-07-15 20:22:27.337645] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:02.118 [2024-07-15 20:22:27.337655] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:02.118 [2024-07-15 20:22:27.337663] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.337668] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.337673] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xccbec0) 00:24:02.118 [2024-07-15 20:22:27.337681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.118 [2024-07-15 20:22:27.337695] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4ee40, cid 0, qid 0 00:24:02.118 [2024-07-15 20:22:27.337786] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.118 [2024-07-15 20:22:27.337795] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.118 
[2024-07-15 20:22:27.337799] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.337804] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4ee40) on tqpair=0xccbec0 00:24:02.118 [2024-07-15 20:22:27.337811] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:02.118 [2024-07-15 20:22:27.337823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.337829] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.337833] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xccbec0) 00:24:02.118 [2024-07-15 20:22:27.337842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.118 [2024-07-15 20:22:27.337858] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4ee40, cid 0, qid 0 00:24:02.118 [2024-07-15 20:22:27.337927] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.118 [2024-07-15 20:22:27.337936] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.118 [2024-07-15 20:22:27.337941] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.337945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4ee40) on tqpair=0xccbec0 00:24:02.118 [2024-07-15 20:22:27.337951] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:02.118 [2024-07-15 20:22:27.337957] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:02.118 [2024-07-15 20:22:27.337967] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:02.118 [2024-07-15 20:22:27.338074] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:02.118 [2024-07-15 20:22:27.338079] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:02.118 [2024-07-15 20:22:27.338089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.338094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.338099] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xccbec0) 00:24:02.118 [2024-07-15 20:22:27.338108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.118 [2024-07-15 20:22:27.338121] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4ee40, cid 0, qid 0 00:24:02.118 [2024-07-15 20:22:27.338210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.118 [2024-07-15 20:22:27.338218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.118 [2024-07-15 20:22:27.338222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.338227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4ee40) on tqpair=0xccbec0 00:24:02.118 [2024-07-15 
20:22:27.338233] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:02.118 [2024-07-15 20:22:27.338245] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.338250] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.338264] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xccbec0) 00:24:02.118 [2024-07-15 20:22:27.338272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.118 [2024-07-15 20:22:27.338286] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4ee40, cid 0, qid 0 00:24:02.118 [2024-07-15 20:22:27.338411] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.118 [2024-07-15 20:22:27.338420] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.118 [2024-07-15 20:22:27.338424] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.338429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4ee40) on tqpair=0xccbec0 00:24:02.118 [2024-07-15 20:22:27.338435] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:02.118 [2024-07-15 20:22:27.338440] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:02.118 [2024-07-15 20:22:27.338451] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:02.118 [2024-07-15 20:22:27.338464] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:02.118 [2024-07-15 20:22:27.338475] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.338480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xccbec0) 00:24:02.118 [2024-07-15 20:22:27.338489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.118 [2024-07-15 20:22:27.338503] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4ee40, cid 0, qid 0 00:24:02.118 [2024-07-15 20:22:27.338607] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.118 [2024-07-15 20:22:27.338616] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.118 [2024-07-15 20:22:27.338621] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.338626] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xccbec0): datao=0, datal=4096, cccid=0 00:24:02.118 [2024-07-15 20:22:27.338632] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd4ee40) on tqpair(0xccbec0): expected_datao=0, payload_size=4096 00:24:02.118 [2024-07-15 20:22:27.338637] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.338652] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.118 [2024-07-15 20:22:27.338657] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.118 
[2024-07-15 20:22:27.382266] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.118 [2024-07-15 20:22:27.382281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.119 [2024-07-15 20:22:27.382286] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.119 [2024-07-15 20:22:27.382291] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4ee40) on tqpair=0xccbec0 00:24:02.119 [2024-07-15 20:22:27.382301] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:02.119 [2024-07-15 20:22:27.382312] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:02.119 [2024-07-15 20:22:27.382318] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:02.119 [2024-07-15 20:22:27.382323] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:02.119 [2024-07-15 20:22:27.382328] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:02.119 [2024-07-15 20:22:27.382335] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:02.119 [2024-07-15 20:22:27.382346] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:02.119 [2024-07-15 20:22:27.382355] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.119 [2024-07-15 20:22:27.382361] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.119 [2024-07-15 20:22:27.382365] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xccbec0) 00:24:02.119 [2024-07-15 20:22:27.382375] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:02.119 [2024-07-15 20:22:27.382392] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4ee40, cid 0, qid 0 00:24:02.119 [2024-07-15 20:22:27.382497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.119 [2024-07-15 20:22:27.382505] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.119 [2024-07-15 20:22:27.382510] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.119 [2024-07-15 20:22:27.382515] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4ee40) on tqpair=0xccbec0 00:24:02.119 [2024-07-15 20:22:27.382523] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.119 [2024-07-15 20:22:27.382531] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.119 [2024-07-15 20:22:27.382536] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xccbec0) 00:24:02.119 [2024-07-15 20:22:27.382544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.119 [2024-07-15 20:22:27.382551] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.119 [2024-07-15 20:22:27.382556] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.119 [2024-07-15 20:22:27.382561] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xccbec0) 
00:24:02.119 [2024-07-15 20:22:27.382568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.119 [2024-07-15 20:22:27.382576] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.119 [2024-07-15 20:22:27.382581] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.119 [2024-07-15 20:22:27.382585] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xccbec0) 00:24:02.119 [2024-07-15 20:22:27.382593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.119 [2024-07-15 20:22:27.382600] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.119 [2024-07-15 20:22:27.382605] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.119 [2024-07-15 20:22:27.382610] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xccbec0) 00:24:02.119 [2024-07-15 20:22:27.382617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.119 [2024-07-15 20:22:27.382623] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:02.119 [2024-07-15 20:22:27.382636] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:02.119 [2024-07-15 20:22:27.382645] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.119 [2024-07-15 20:22:27.382650] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xccbec0) 00:24:02.119 [2024-07-15 20:22:27.382658] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.119 [2024-07-15 20:22:27.382675] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4ee40, cid 0, qid 0 00:24:02.119 [2024-07-15 20:22:27.382682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4efc0, cid 1, qid 0 00:24:02.119 [2024-07-15 20:22:27.382688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f140, cid 2, qid 0 00:24:02.119 [2024-07-15 20:22:27.382694] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f2c0, cid 3, qid 0 00:24:02.119 [2024-07-15 20:22:27.382700] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f440, cid 4, qid 0 00:24:02.119 [2024-07-15 20:22:27.382795] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.119 [2024-07-15 20:22:27.382804] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.119 [2024-07-15 20:22:27.382809] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.119 [2024-07-15 20:22:27.382813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f440) on tqpair=0xccbec0 00:24:02.119 [2024-07-15 20:22:27.382819] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:02.119 [2024-07-15 20:22:27.382826] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:02.119 [2024-07-15 20:22:27.382836] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:02.119 [2024-07-15 20:22:27.382843] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:02.119 [2024-07-15 20:22:27.382854] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.119 [2024-07-15 20:22:27.382859] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.119 [2024-07-15 20:22:27.382864] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xccbec0) 00:24:02.119 [2024-07-15 20:22:27.382872] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:02.119 [2024-07-15 20:22:27.382886] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f440, cid 4, qid 0 00:24:02.119 [2024-07-15 20:22:27.382958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.119 [2024-07-15 20:22:27.382967] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.119 [2024-07-15 20:22:27.382971] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.119 [2024-07-15 20:22:27.382977] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f440) on tqpair=0xccbec0 00:24:02.119 [2024-07-15 20:22:27.383056] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:02.119 [2024-07-15 20:22:27.383068] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:02.119 [2024-07-15 20:22:27.383078] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.119 [2024-07-15 20:22:27.383083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xccbec0) 00:24:02.119 [2024-07-15 20:22:27.383091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.119 [2024-07-15 20:22:27.383105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f440, cid 4, qid 0 00:24:02.119 [2024-07-15 20:22:27.383216] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.119 [2024-07-15 20:22:27.383224] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.119 [2024-07-15 20:22:27.383229] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.383233] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xccbec0): datao=0, datal=4096, cccid=4 00:24:02.120 [2024-07-15 20:22:27.383239] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd4f440) on tqpair(0xccbec0): expected_datao=0, payload_size=4096 00:24:02.120 [2024-07-15 20:22:27.383244] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.383253] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.383268] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.383293] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.120 [2024-07-15 20:22:27.383301] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:24:02.120 [2024-07-15 20:22:27.383306] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.383311] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f440) on tqpair=0xccbec0 00:24:02.120 [2024-07-15 20:22:27.383321] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:02.120 [2024-07-15 20:22:27.383336] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:02.120 [2024-07-15 20:22:27.383348] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:02.120 [2024-07-15 20:22:27.383357] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.383362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xccbec0) 00:24:02.120 [2024-07-15 20:22:27.383371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.120 [2024-07-15 20:22:27.383388] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f440, cid 4, qid 0 00:24:02.120 [2024-07-15 20:22:27.383479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.120 [2024-07-15 20:22:27.383488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.120 [2024-07-15 20:22:27.383493] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.383498] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xccbec0): datao=0, datal=4096, cccid=4 00:24:02.120 [2024-07-15 20:22:27.383503] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd4f440) on tqpair(0xccbec0): expected_datao=0, payload_size=4096 00:24:02.120 [2024-07-15 20:22:27.383509] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.383517] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.383522] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.383568] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.120 [2024-07-15 20:22:27.383576] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.120 [2024-07-15 20:22:27.383581] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.383585] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f440) on tqpair=0xccbec0 00:24:02.120 [2024-07-15 20:22:27.383599] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:02.120 [2024-07-15 20:22:27.383611] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:02.120 [2024-07-15 20:22:27.383621] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.383626] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xccbec0) 00:24:02.120 [2024-07-15 20:22:27.383634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.120 [2024-07-15 20:22:27.383649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f440, cid 4, qid 0 00:24:02.120 [2024-07-15 20:22:27.383745] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.120 [2024-07-15 20:22:27.383753] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.120 [2024-07-15 20:22:27.383758] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.383763] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xccbec0): datao=0, datal=4096, cccid=4 00:24:02.120 [2024-07-15 20:22:27.383768] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd4f440) on tqpair(0xccbec0): expected_datao=0, payload_size=4096 00:24:02.120 [2024-07-15 20:22:27.383774] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.383782] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.383787] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.383822] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.120 [2024-07-15 20:22:27.383831] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.120 [2024-07-15 20:22:27.383835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.383840] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f440) on tqpair=0xccbec0 00:24:02.120 [2024-07-15 20:22:27.383849] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:02.120 [2024-07-15 20:22:27.383860] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:02.120 [2024-07-15 20:22:27.383872] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:02.120 [2024-07-15 20:22:27.383881] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:02.120 [2024-07-15 20:22:27.383890] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:02.120 [2024-07-15 20:22:27.383897] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:02.120 [2024-07-15 20:22:27.383903] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:02.120 [2024-07-15 20:22:27.383909] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:02.120 [2024-07-15 20:22:27.383916] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:02.120 [2024-07-15 20:22:27.383933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.383939] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xccbec0) 00:24:02.120 [2024-07-15 20:22:27.383947] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.120 [2024-07-15 20:22:27.383955] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.383960] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.383965] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xccbec0) 00:24:02.120 [2024-07-15 20:22:27.383973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.120 [2024-07-15 20:22:27.383990] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f440, cid 4, qid 0 00:24:02.120 [2024-07-15 20:22:27.383998] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f5c0, cid 5, qid 0 00:24:02.120 [2024-07-15 20:22:27.384115] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.120 [2024-07-15 20:22:27.384124] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.120 [2024-07-15 20:22:27.384129] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.384134] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f440) on tqpair=0xccbec0 00:24:02.120 [2024-07-15 20:22:27.384142] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.120 [2024-07-15 20:22:27.384149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.120 [2024-07-15 20:22:27.384154] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.384159] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f5c0) on tqpair=0xccbec0 00:24:02.120 [2024-07-15 20:22:27.384171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.384176] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xccbec0) 00:24:02.120 [2024-07-15 20:22:27.384184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.120 [2024-07-15 20:22:27.384197] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f5c0, cid 5, qid 0 00:24:02.120 [2024-07-15 20:22:27.384291] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.120 [2024-07-15 20:22:27.384301] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.120 [2024-07-15 20:22:27.384306] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.384311] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f5c0) on tqpair=0xccbec0 00:24:02.120 [2024-07-15 20:22:27.384322] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.384328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xccbec0) 00:24:02.120 [2024-07-15 20:22:27.384336] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.120 [2024-07-15 20:22:27.384353] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f5c0, cid 5, qid 0 00:24:02.120 [2024-07-15 20:22:27.384419] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.120 [2024-07-15 20:22:27.384428] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:24:02.120 [2024-07-15 20:22:27.384432] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.384437] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f5c0) on tqpair=0xccbec0 00:24:02.120 [2024-07-15 20:22:27.384448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.384454] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xccbec0) 00:24:02.120 [2024-07-15 20:22:27.384462] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.120 [2024-07-15 20:22:27.384476] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f5c0, cid 5, qid 0 00:24:02.120 [2024-07-15 20:22:27.384598] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.120 [2024-07-15 20:22:27.384607] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.120 [2024-07-15 20:22:27.384612] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.384616] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f5c0) on tqpair=0xccbec0 00:24:02.120 [2024-07-15 20:22:27.384634] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.120 [2024-07-15 20:22:27.384640] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xccbec0) 00:24:02.121 [2024-07-15 20:22:27.384649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.121 [2024-07-15 20:22:27.384658] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.384663] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xccbec0) 00:24:02.121 [2024-07-15 20:22:27.384671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.121 [2024-07-15 20:22:27.384680] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.384685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xccbec0) 00:24:02.121 [2024-07-15 20:22:27.384693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.121 [2024-07-15 20:22:27.384702] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.384707] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xccbec0) 00:24:02.121 [2024-07-15 20:22:27.384715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.121 [2024-07-15 20:22:27.384730] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f5c0, cid 5, qid 0 00:24:02.121 [2024-07-15 20:22:27.384737] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f440, cid 4, qid 0 00:24:02.121 [2024-07-15 20:22:27.384744] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f740, cid 6, qid 0 00:24:02.121 [2024-07-15 
20:22:27.384749] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f8c0, cid 7, qid 0 00:24:02.121 [2024-07-15 20:22:27.384877] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.121 [2024-07-15 20:22:27.384887] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.121 [2024-07-15 20:22:27.384891] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.384896] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xccbec0): datao=0, datal=8192, cccid=5 00:24:02.121 [2024-07-15 20:22:27.384904] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd4f5c0) on tqpair(0xccbec0): expected_datao=0, payload_size=8192 00:24:02.121 [2024-07-15 20:22:27.384910] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.384953] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.384959] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.384966] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.121 [2024-07-15 20:22:27.384973] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.121 [2024-07-15 20:22:27.384978] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.384982] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xccbec0): datao=0, datal=512, cccid=4 00:24:02.121 [2024-07-15 20:22:27.384988] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd4f440) on tqpair(0xccbec0): expected_datao=0, payload_size=512 00:24:02.121 [2024-07-15 20:22:27.384993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385002] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385006] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385013] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.121 [2024-07-15 20:22:27.385021] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.121 [2024-07-15 20:22:27.385025] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385029] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xccbec0): datao=0, datal=512, cccid=6 00:24:02.121 [2024-07-15 20:22:27.385035] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd4f740) on tqpair(0xccbec0): expected_datao=0, payload_size=512 00:24:02.121 [2024-07-15 20:22:27.385041] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385048] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385053] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385060] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:02.121 [2024-07-15 20:22:27.385068] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:02.121 [2024-07-15 20:22:27.385072] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385076] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xccbec0): datao=0, datal=4096, cccid=7 00:24:02.121 [2024-07-15 20:22:27.385082] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd4f8c0) on tqpair(0xccbec0): expected_datao=0, payload_size=4096 00:24:02.121 [2024-07-15 20:22:27.385087] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385096] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385100] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385110] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.121 [2024-07-15 20:22:27.385118] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.121 [2024-07-15 20:22:27.385122] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385127] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f5c0) on tqpair=0xccbec0 00:24:02.121 [2024-07-15 20:22:27.385142] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.121 [2024-07-15 20:22:27.385150] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.121 [2024-07-15 20:22:27.385154] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385159] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f440) on tqpair=0xccbec0 00:24:02.121 [2024-07-15 20:22:27.385171] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.121 [2024-07-15 20:22:27.385179] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.121 [2024-07-15 20:22:27.385183] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385190] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f740) on tqpair=0xccbec0 00:24:02.121 [2024-07-15 20:22:27.385199] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.121 [2024-07-15 20:22:27.385206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.121 [2024-07-15 20:22:27.385211] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385216] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f8c0) on tqpair=0xccbec0 00:24:02.121 ===================================================== 00:24:02.121 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:02.121 ===================================================== 00:24:02.121 Controller Capabilities/Features 00:24:02.121 ================================ 00:24:02.121 Vendor ID: 8086 00:24:02.121 Subsystem Vendor ID: 8086 00:24:02.121 Serial Number: SPDK00000000000001 00:24:02.121 Model Number: SPDK bdev Controller 00:24:02.121 Firmware Version: 24.09 00:24:02.121 Recommended Arb Burst: 6 00:24:02.121 IEEE OUI Identifier: e4 d2 5c 00:24:02.121 Multi-path I/O 00:24:02.121 May have multiple subsystem ports: Yes 00:24:02.121 May have multiple controllers: Yes 00:24:02.121 Associated with SR-IOV VF: No 00:24:02.121 Max Data Transfer Size: 131072 00:24:02.121 Max Number of Namespaces: 32 00:24:02.121 Max Number of I/O Queues: 127 00:24:02.121 NVMe Specification Version (VS): 1.3 00:24:02.121 NVMe Specification Version (Identify): 1.3 00:24:02.121 Maximum Queue Entries: 128 00:24:02.121 Contiguous Queues Required: Yes 00:24:02.121 Arbitration Mechanisms Supported 00:24:02.121 Weighted Round Robin: Not Supported 00:24:02.121 Vendor Specific: Not Supported 00:24:02.121 Reset Timeout: 15000 ms 00:24:02.121 
Doorbell Stride: 4 bytes 00:24:02.121 NVM Subsystem Reset: Not Supported 00:24:02.121 Command Sets Supported 00:24:02.121 NVM Command Set: Supported 00:24:02.121 Boot Partition: Not Supported 00:24:02.121 Memory Page Size Minimum: 4096 bytes 00:24:02.121 Memory Page Size Maximum: 4096 bytes 00:24:02.121 Persistent Memory Region: Not Supported 00:24:02.121 Optional Asynchronous Events Supported 00:24:02.121 Namespace Attribute Notices: Supported 00:24:02.121 Firmware Activation Notices: Not Supported 00:24:02.121 ANA Change Notices: Not Supported 00:24:02.121 PLE Aggregate Log Change Notices: Not Supported 00:24:02.121 LBA Status Info Alert Notices: Not Supported 00:24:02.121 EGE Aggregate Log Change Notices: Not Supported 00:24:02.121 Normal NVM Subsystem Shutdown event: Not Supported 00:24:02.121 Zone Descriptor Change Notices: Not Supported 00:24:02.121 Discovery Log Change Notices: Not Supported 00:24:02.121 Controller Attributes 00:24:02.121 128-bit Host Identifier: Supported 00:24:02.121 Non-Operational Permissive Mode: Not Supported 00:24:02.121 NVM Sets: Not Supported 00:24:02.121 Read Recovery Levels: Not Supported 00:24:02.121 Endurance Groups: Not Supported 00:24:02.121 Predictable Latency Mode: Not Supported 00:24:02.121 Traffic Based Keep ALive: Not Supported 00:24:02.121 Namespace Granularity: Not Supported 00:24:02.121 SQ Associations: Not Supported 00:24:02.121 UUID List: Not Supported 00:24:02.121 Multi-Domain Subsystem: Not Supported 00:24:02.121 Fixed Capacity Management: Not Supported 00:24:02.121 Variable Capacity Management: Not Supported 00:24:02.121 Delete Endurance Group: Not Supported 00:24:02.121 Delete NVM Set: Not Supported 00:24:02.121 Extended LBA Formats Supported: Not Supported 00:24:02.121 Flexible Data Placement Supported: Not Supported 00:24:02.121 00:24:02.121 Controller Memory Buffer Support 00:24:02.121 ================================ 00:24:02.121 Supported: No 00:24:02.121 00:24:02.121 Persistent Memory Region Support 00:24:02.121 ================================ 00:24:02.121 Supported: No 00:24:02.121 00:24:02.121 Admin Command Set Attributes 00:24:02.121 ============================ 00:24:02.121 Security Send/Receive: Not Supported 00:24:02.121 Format NVM: Not Supported 00:24:02.121 Firmware Activate/Download: Not Supported 00:24:02.121 Namespace Management: Not Supported 00:24:02.121 Device Self-Test: Not Supported 00:24:02.121 Directives: Not Supported 00:24:02.121 NVMe-MI: Not Supported 00:24:02.121 Virtualization Management: Not Supported 00:24:02.121 Doorbell Buffer Config: Not Supported 00:24:02.121 Get LBA Status Capability: Not Supported 00:24:02.121 Command & Feature Lockdown Capability: Not Supported 00:24:02.121 Abort Command Limit: 4 00:24:02.121 Async Event Request Limit: 4 00:24:02.121 Number of Firmware Slots: N/A 00:24:02.121 Firmware Slot 1 Read-Only: N/A 00:24:02.121 Firmware Activation Without Reset: N/A 00:24:02.121 Multiple Update Detection Support: N/A 00:24:02.121 Firmware Update Granularity: No Information Provided 00:24:02.121 Per-Namespace SMART Log: No 00:24:02.121 Asymmetric Namespace Access Log Page: Not Supported 00:24:02.121 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:02.121 Command Effects Log Page: Supported 00:24:02.121 Get Log Page Extended Data: Supported 00:24:02.121 Telemetry Log Pages: Not Supported 00:24:02.121 Persistent Event Log Pages: Not Supported 00:24:02.121 Supported Log Pages Log Page: May Support 00:24:02.121 Commands Supported & Effects Log Page: Not Supported 00:24:02.121 Feature Identifiers & 
Effects Log Page:May Support 00:24:02.121 NVMe-MI Commands & Effects Log Page: May Support 00:24:02.121 Data Area 4 for Telemetry Log: Not Supported 00:24:02.121 Error Log Page Entries Supported: 128 00:24:02.121 Keep Alive: Supported 00:24:02.121 Keep Alive Granularity: 10000 ms 00:24:02.121 00:24:02.121 NVM Command Set Attributes 00:24:02.121 ========================== 00:24:02.121 Submission Queue Entry Size 00:24:02.121 Max: 64 00:24:02.121 Min: 64 00:24:02.121 Completion Queue Entry Size 00:24:02.121 Max: 16 00:24:02.121 Min: 16 00:24:02.121 Number of Namespaces: 32 00:24:02.121 Compare Command: Supported 00:24:02.121 Write Uncorrectable Command: Not Supported 00:24:02.121 Dataset Management Command: Supported 00:24:02.121 Write Zeroes Command: Supported 00:24:02.121 Set Features Save Field: Not Supported 00:24:02.121 Reservations: Supported 00:24:02.121 Timestamp: Not Supported 00:24:02.121 Copy: Supported 00:24:02.121 Volatile Write Cache: Present 00:24:02.121 Atomic Write Unit (Normal): 1 00:24:02.121 Atomic Write Unit (PFail): 1 00:24:02.121 Atomic Compare & Write Unit: 1 00:24:02.121 Fused Compare & Write: Supported 00:24:02.121 Scatter-Gather List 00:24:02.121 SGL Command Set: Supported 00:24:02.121 SGL Keyed: Supported 00:24:02.121 SGL Bit Bucket Descriptor: Not Supported 00:24:02.121 SGL Metadata Pointer: Not Supported 00:24:02.121 Oversized SGL: Not Supported 00:24:02.121 SGL Metadata Address: Not Supported 00:24:02.121 SGL Offset: Supported 00:24:02.121 Transport SGL Data Block: Not Supported 00:24:02.121 Replay Protected Memory Block: Not Supported 00:24:02.121 00:24:02.121 Firmware Slot Information 00:24:02.121 ========================= 00:24:02.121 Active slot: 1 00:24:02.121 Slot 1 Firmware Revision: 24.09 00:24:02.121 00:24:02.121 00:24:02.121 Commands Supported and Effects 00:24:02.121 ============================== 00:24:02.121 Admin Commands 00:24:02.121 -------------- 00:24:02.121 Get Log Page (02h): Supported 00:24:02.121 Identify (06h): Supported 00:24:02.121 Abort (08h): Supported 00:24:02.121 Set Features (09h): Supported 00:24:02.121 Get Features (0Ah): Supported 00:24:02.121 Asynchronous Event Request (0Ch): Supported 00:24:02.121 Keep Alive (18h): Supported 00:24:02.121 I/O Commands 00:24:02.121 ------------ 00:24:02.121 Flush (00h): Supported LBA-Change 00:24:02.121 Write (01h): Supported LBA-Change 00:24:02.121 Read (02h): Supported 00:24:02.121 Compare (05h): Supported 00:24:02.121 Write Zeroes (08h): Supported LBA-Change 00:24:02.121 Dataset Management (09h): Supported LBA-Change 00:24:02.121 Copy (19h): Supported LBA-Change 00:24:02.121 00:24:02.121 Error Log 00:24:02.121 ========= 00:24:02.121 00:24:02.121 Arbitration 00:24:02.121 =========== 00:24:02.121 Arbitration Burst: 1 00:24:02.121 00:24:02.121 Power Management 00:24:02.121 ================ 00:24:02.121 Number of Power States: 1 00:24:02.121 Current Power State: Power State #0 00:24:02.121 Power State #0: 00:24:02.121 Max Power: 0.00 W 00:24:02.121 Non-Operational State: Operational 00:24:02.121 Entry Latency: Not Reported 00:24:02.121 Exit Latency: Not Reported 00:24:02.121 Relative Read Throughput: 0 00:24:02.121 Relative Read Latency: 0 00:24:02.121 Relative Write Throughput: 0 00:24:02.121 Relative Write Latency: 0 00:24:02.121 Idle Power: Not Reported 00:24:02.121 Active Power: Not Reported 00:24:02.121 Non-Operational Permissive Mode: Not Supported 00:24:02.121 00:24:02.121 Health Information 00:24:02.121 ================== 00:24:02.121 Critical Warnings: 00:24:02.121 Available Spare Space: 
OK 00:24:02.121 Temperature: OK 00:24:02.121 Device Reliability: OK 00:24:02.121 Read Only: No 00:24:02.121 Volatile Memory Backup: OK 00:24:02.121 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:02.121 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:02.121 Available Spare: 0% 00:24:02.121 Available Spare Threshold: 0% 00:24:02.121 Life Percentage Used:[2024-07-15 20:22:27.385340] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385348] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xccbec0) 00:24:02.121 [2024-07-15 20:22:27.385356] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.121 [2024-07-15 20:22:27.385373] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f8c0, cid 7, qid 0 00:24:02.121 [2024-07-15 20:22:27.385497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.121 [2024-07-15 20:22:27.385506] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.121 [2024-07-15 20:22:27.385511] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385516] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f8c0) on tqpair=0xccbec0 00:24:02.121 [2024-07-15 20:22:27.385552] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:02.121 [2024-07-15 20:22:27.385565] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4ee40) on tqpair=0xccbec0 00:24:02.121 [2024-07-15 20:22:27.385573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.121 [2024-07-15 20:22:27.385580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4efc0) on tqpair=0xccbec0 00:24:02.121 [2024-07-15 20:22:27.385586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.121 [2024-07-15 20:22:27.385593] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f140) on tqpair=0xccbec0 00:24:02.121 [2024-07-15 20:22:27.385599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.121 [2024-07-15 20:22:27.385605] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f2c0) on tqpair=0xccbec0 00:24:02.121 [2024-07-15 20:22:27.385611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.121 [2024-07-15 20:22:27.385621] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385626] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385631] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xccbec0) 00:24:02.121 [2024-07-15 20:22:27.385639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.121 [2024-07-15 20:22:27.385655] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f2c0, cid 3, qid 0 00:24:02.121 [2024-07-15 20:22:27.385746] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.121 [2024-07-15 20:22:27.385754] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.121 [2024-07-15 20:22:27.385758] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385764] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f2c0) on tqpair=0xccbec0 00:24:02.121 [2024-07-15 20:22:27.385772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385777] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385781] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xccbec0) 00:24:02.121 [2024-07-15 20:22:27.385790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.121 [2024-07-15 20:22:27.385810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f2c0, cid 3, qid 0 00:24:02.121 [2024-07-15 20:22:27.385951] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.121 [2024-07-15 20:22:27.385959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.121 [2024-07-15 20:22:27.385963] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385968] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f2c0) on tqpair=0xccbec0 00:24:02.121 [2024-07-15 20:22:27.385974] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:02.121 [2024-07-15 20:22:27.385979] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:02.121 [2024-07-15 20:22:27.385992] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.385997] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.121 [2024-07-15 20:22:27.386002] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xccbec0) 00:24:02.121 [2024-07-15 20:22:27.386011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.122 [2024-07-15 20:22:27.386024] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f2c0, cid 3, qid 0 00:24:02.122 [2024-07-15 20:22:27.386098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.122 [2024-07-15 20:22:27.386106] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.122 [2024-07-15 20:22:27.386111] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.122 [2024-07-15 20:22:27.386116] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f2c0) on tqpair=0xccbec0 00:24:02.122 [2024-07-15 20:22:27.386128] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.122 [2024-07-15 20:22:27.386133] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.122 [2024-07-15 20:22:27.386138] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xccbec0) 00:24:02.122 [2024-07-15 20:22:27.386147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.122 [2024-07-15 20:22:27.386160] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f2c0, cid 3, qid 0 00:24:02.122 [2024-07-15 20:22:27.386231] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.122 [2024-07-15 20:22:27.386239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.122 [2024-07-15 20:22:27.386244] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.122 [2024-07-15 20:22:27.386248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f2c0) on tqpair=0xccbec0 00:24:02.122 [2024-07-15 20:22:27.390271] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:02.122 [2024-07-15 20:22:27.390281] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:02.122 [2024-07-15 20:22:27.390285] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xccbec0) 00:24:02.122 [2024-07-15 20:22:27.390294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.122 [2024-07-15 20:22:27.390310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd4f2c0, cid 3, qid 0 00:24:02.122 [2024-07-15 20:22:27.390435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:02.122 [2024-07-15 20:22:27.390444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:02.122 [2024-07-15 20:22:27.390448] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:02.122 [2024-07-15 20:22:27.390453] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd4f2c0) on tqpair=0xccbec0 00:24:02.122 [2024-07-15 20:22:27.390462] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:24:02.122 0% 00:24:02.122 Data Units Read: 0 00:24:02.122 Data Units Written: 0 00:24:02.122 Host Read Commands: 0 00:24:02.122 Host Write Commands: 0 00:24:02.122 Controller Busy Time: 0 minutes 00:24:02.122 Power Cycles: 0 00:24:02.122 Power On Hours: 0 hours 00:24:02.122 Unsafe Shutdowns: 0 00:24:02.122 Unrecoverable Media Errors: 0 00:24:02.122 Lifetime Error Log Entries: 0 00:24:02.122 Warning Temperature Time: 0 minutes 00:24:02.122 Critical Temperature Time: 0 minutes 00:24:02.122 00:24:02.122 Number of Queues 00:24:02.122 ================ 00:24:02.122 Number of I/O Submission Queues: 127 00:24:02.122 Number of I/O Completion Queues: 127 00:24:02.122 00:24:02.122 Active Namespaces 00:24:02.122 ================= 00:24:02.122 Namespace ID:1 00:24:02.122 Error Recovery Timeout: Unlimited 00:24:02.122 Command Set Identifier: NVM (00h) 00:24:02.122 Deallocate: Supported 00:24:02.122 Deallocated/Unwritten Error: Not Supported 00:24:02.122 Deallocated Read Value: Unknown 00:24:02.122 Deallocate in Write Zeroes: Not Supported 00:24:02.122 Deallocated Guard Field: 0xFFFF 00:24:02.122 Flush: Supported 00:24:02.122 Reservation: Supported 00:24:02.122 Namespace Sharing Capabilities: Multiple Controllers 00:24:02.122 Size (in LBAs): 131072 (0GiB) 00:24:02.122 Capacity (in LBAs): 131072 (0GiB) 00:24:02.122 Utilization (in LBAs): 131072 (0GiB) 00:24:02.122 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:02.122 EUI64: ABCDEF0123456789 00:24:02.122 UUID: defc1273-5d50-4854-b9f8-102c344dbd0e 00:24:02.122 Thin Provisioning: Not Supported 00:24:02.122 Per-NS Atomic Units: Yes 00:24:02.122 Atomic Boundary Size (Normal): 0 00:24:02.122 Atomic Boundary Size (PFail): 0 00:24:02.122 Atomic Boundary Offset: 0 00:24:02.122 Maximum Single Source Range Length: 65535 00:24:02.122 Maximum Copy Length: 65535 00:24:02.122 Maximum Source Range Count: 1 
00:24:02.122 NGUID/EUI64 Never Reused: No 00:24:02.122 Namespace Write Protected: No 00:24:02.122 Number of LBA Formats: 1 00:24:02.122 Current LBA Format: LBA Format #00 00:24:02.122 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:02.122 00:24:02.122 20:22:27 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:02.122 20:22:27 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:02.122 20:22:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.122 20:22:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:02.122 20:22:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.122 20:22:27 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:02.122 20:22:27 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:02.122 20:22:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:02.122 20:22:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:02.122 20:22:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:02.122 20:22:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:02.122 20:22:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:02.122 20:22:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:02.122 rmmod nvme_tcp 00:24:02.122 rmmod nvme_fabrics 00:24:02.379 rmmod nvme_keyring 00:24:02.379 20:22:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:02.379 20:22:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:02.379 20:22:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:02.379 20:22:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 132835 ']' 00:24:02.379 20:22:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 132835 00:24:02.379 20:22:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 132835 ']' 00:24:02.379 20:22:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 132835 00:24:02.379 20:22:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:24:02.379 20:22:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:02.379 20:22:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 132835 00:24:02.379 20:22:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:02.379 20:22:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:02.379 20:22:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 132835' 00:24:02.379 killing process with pid 132835 00:24:02.379 20:22:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 132835 00:24:02.379 20:22:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 132835 00:24:02.638 20:22:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:02.638 20:22:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:02.638 20:22:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:02.638 20:22:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:02.638 20:22:27 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:02.638 20:22:27 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.638 20:22:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.638 20:22:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.539 20:22:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:04.539 00:24:04.539 real 0m9.179s 00:24:04.539 user 0m8.125s 00:24:04.539 sys 0m4.244s 00:24:04.539 20:22:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:04.539 20:22:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:04.539 ************************************ 00:24:04.539 END TEST nvmf_identify 00:24:04.539 ************************************ 00:24:04.539 20:22:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:04.539 20:22:29 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:04.539 20:22:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:04.539 20:22:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:04.539 20:22:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:04.797 ************************************ 00:24:04.797 START TEST nvmf_perf 00:24:04.797 ************************************ 00:24:04.797 20:22:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:04.797 * Looking for test storage... 00:24:04.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:04.797 20:22:29 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.797 20:22:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:04.797 20:22:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.797 20:22:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.797 20:22:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.797 20:22:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.797 20:22:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.797 20:22:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.797 20:22:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.797 20:22:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.797 20:22:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.797 20:22:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:04.797 20:22:30 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:04.798 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:04.798 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.798 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:04.798 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:04.798 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:04.798 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.798 20:22:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.798 20:22:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.798 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:04.798 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:04.798 20:22:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:04.798 20:22:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- 
# mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:10.069 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:10.069 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 
-- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:10.069 Found net devices under 0000:af:00.0: cvl_0_0 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:10.069 Found net devices under 0000:af:00.1: cvl_0_1 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.069 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:10.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:24:10.335 00:24:10.335 --- 10.0.0.2 ping statistics --- 00:24:10.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.335 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:10.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:24:10.335 00:24:10.335 --- 10.0.0.1 ping statistics --- 00:24:10.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.335 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=136755 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 136755 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 136755 ']' 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:10.335 20:22:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:10.335 [2024-07-15 20:22:35.659459] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:24:10.335 [2024-07-15 20:22:35.659514] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.593 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.593 [2024-07-15 20:22:35.747935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:10.593 [2024-07-15 20:22:35.840941] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.593 [2024-07-15 20:22:35.840984] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.593 [2024-07-15 20:22:35.840995] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.593 [2024-07-15 20:22:35.841003] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.593 [2024-07-15 20:22:35.841011] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.593 [2024-07-15 20:22:35.841058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.593 [2024-07-15 20:22:35.841183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.593 [2024-07-15 20:22:35.841206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:10.593 [2024-07-15 20:22:35.841209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.850 20:22:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.850 20:22:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:24:10.850 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:10.850 20:22:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:10.850 20:22:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:10.850 20:22:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.850 20:22:35 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:10.850 20:22:35 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:14.123 20:22:39 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:14.123 20:22:39 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:14.123 20:22:39 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:86:00.0 00:24:14.123 20:22:39 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:14.380 20:22:39 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:14.380 20:22:39 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:86:00.0 ']' 00:24:14.380 20:22:39 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:14.380 20:22:39 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:14.380 20:22:39 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:14.640 [2024-07-15 20:22:39.887359] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:24:14.640 20:22:39 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:14.898 20:22:40 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:14.898 20:22:40 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:15.156 20:22:40 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:15.156 20:22:40 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:15.414 20:22:40 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:15.671 [2024-07-15 20:22:40.917703] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.671 20:22:40 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:15.929 20:22:41 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:86:00.0 ']' 00:24:15.929 20:22:41 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:24:15.929 20:22:41 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:15.929 20:22:41 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:24:17.302 Initializing NVMe Controllers 00:24:17.302 Attached to NVMe Controller at 0000:86:00.0 [8086:0a54] 00:24:17.302 Associating PCIE (0000:86:00.0) NSID 1 with lcore 0 00:24:17.302 Initialization complete. Launching workers. 00:24:17.302 ======================================================== 00:24:17.302 Latency(us) 00:24:17.302 Device Information : IOPS MiB/s Average min max 00:24:17.302 PCIE (0000:86:00.0) NSID 1 from core 0: 69541.60 271.65 459.40 33.07 4412.33 00:24:17.302 ======================================================== 00:24:17.302 Total : 69541.60 271.65 459.40 33.07 4412.33 00:24:17.302 00:24:17.302 20:22:42 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:17.302 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.676 Initializing NVMe Controllers 00:24:18.676 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:18.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:18.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:18.676 Initialization complete. Launching workers. 
00:24:18.676 ======================================================== 00:24:18.676 Latency(us) 00:24:18.676 Device Information : IOPS MiB/s Average min max 00:24:18.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 110.38 0.43 9429.23 160.86 47395.97 00:24:18.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 45.75 0.18 22899.52 7944.30 55894.99 00:24:18.676 ======================================================== 00:24:18.676 Total : 156.13 0.61 13375.94 160.86 55894.99 00:24:18.676 00:24:18.676 20:22:43 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:18.676 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.051 Initializing NVMe Controllers 00:24:20.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:20.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:20.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:20.051 Initialization complete. Launching workers. 00:24:20.051 ======================================================== 00:24:20.051 Latency(us) 00:24:20.051 Device Information : IOPS MiB/s Average min max 00:24:20.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7894.97 30.84 4053.97 639.79 10015.61 00:24:20.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3838.58 14.99 8364.95 6373.00 15968.75 00:24:20.051 ======================================================== 00:24:20.051 Total : 11733.55 45.83 5464.28 639.79 15968.75 00:24:20.051 00:24:20.051 20:22:45 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:20.051 20:22:45 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:20.051 20:22:45 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:20.051 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.588 Initializing NVMe Controllers 00:24:22.588 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:22.588 Controller IO queue size 128, less than required. 00:24:22.588 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:22.588 Controller IO queue size 128, less than required. 00:24:22.588 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:22.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:22.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:22.588 Initialization complete. Launching workers. 
00:24:22.588 ======================================================== 00:24:22.588 Latency(us) 00:24:22.588 Device Information : IOPS MiB/s Average min max 00:24:22.588 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1374.17 343.54 95451.48 57757.40 134558.36 00:24:22.588 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 584.79 146.20 221910.01 79370.50 336813.19 00:24:22.588 ======================================================== 00:24:22.588 Total : 1958.96 489.74 133202.22 57757.40 336813.19 00:24:22.588 00:24:22.588 20:22:47 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:22.588 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.588 No valid NVMe controllers or AIO or URING devices found 00:24:22.588 Initializing NVMe Controllers 00:24:22.588 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:22.588 Controller IO queue size 128, less than required. 00:24:22.588 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:22.588 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:22.588 Controller IO queue size 128, less than required. 00:24:22.588 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:22.588 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:22.588 WARNING: Some requested NVMe devices were skipped 00:24:22.588 20:22:47 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:22.588 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.115 Initializing NVMe Controllers 00:24:25.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:25.116 Controller IO queue size 128, less than required. 00:24:25.116 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:25.116 Controller IO queue size 128, less than required. 00:24:25.116 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:25.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:25.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:25.116 Initialization complete. Launching workers. 
00:24:25.116 00:24:25.116 ==================== 00:24:25.116 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:25.116 TCP transport: 00:24:25.116 polls: 17751 00:24:25.116 idle_polls: 8236 00:24:25.116 sock_completions: 9515 00:24:25.116 nvme_completions: 5435 00:24:25.116 submitted_requests: 8206 00:24:25.116 queued_requests: 1 00:24:25.116 00:24:25.116 ==================== 00:24:25.116 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:25.116 TCP transport: 00:24:25.116 polls: 16332 00:24:25.116 idle_polls: 6314 00:24:25.116 sock_completions: 10018 00:24:25.116 nvme_completions: 4743 00:24:25.116 submitted_requests: 7072 00:24:25.116 queued_requests: 1 00:24:25.116 ======================================================== 00:24:25.116 Latency(us) 00:24:25.116 Device Information : IOPS MiB/s Average min max 00:24:25.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1356.87 339.22 96334.48 56113.41 156189.90 00:24:25.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1184.08 296.02 110720.50 66732.17 179947.44 00:24:25.116 ======================================================== 00:24:25.116 Total : 2540.94 635.24 103038.35 56113.41 179947.44 00:24:25.116 00:24:25.116 20:22:50 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:25.116 20:22:50 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:25.373 rmmod nvme_tcp 00:24:25.373 rmmod nvme_fabrics 00:24:25.373 rmmod nvme_keyring 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 136755 ']' 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 136755 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 136755 ']' 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 136755 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 136755 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 136755' 00:24:25.373 killing process with pid 136755 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 136755 00:24:25.373 20:22:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 136755 00:24:27.297 20:22:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:27.297 20:22:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:27.297 20:22:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:27.297 20:22:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:27.297 20:22:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:27.297 20:22:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.297 20:22:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:27.297 20:22:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.205 20:22:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:29.205 00:24:29.205 real 0m24.380s 00:24:29.205 user 1m6.218s 00:24:29.205 sys 0m7.291s 00:24:29.205 20:22:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:29.205 20:22:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:29.205 ************************************ 00:24:29.205 END TEST nvmf_perf 00:24:29.205 ************************************ 00:24:29.205 20:22:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:29.205 20:22:54 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:29.205 20:22:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:29.205 20:22:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:29.205 20:22:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:29.205 ************************************ 00:24:29.205 START TEST nvmf_fio_host 00:24:29.205 ************************************ 00:24:29.205 20:22:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:29.205 * Looking for test storage... 
00:24:29.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:29.205 20:22:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:29.205 20:22:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.205 20:22:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.205 20:22:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.205 20:22:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.205 20:22:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:29.206 20:22:54 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:34.483 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:34.483 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:34.483 Found net devices under 0000:af:00.0: cvl_0_0 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:34.483 Found net devices under 0000:af:00.1: cvl_0_1 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
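The "Found net devices under 0000:af:00.x" lines above come from common.sh simply globbing each port's sysfs node; the same PCI-address-to-netdev mapping can be checked by hand, e.g. (a sketch assuming the same 0000:af:00.0 port):

# list the kernel net devices backing a PCI function (prints cvl_0_0 on this test bed)
ls /sys/bus/pci/devices/0000:af:00.0/net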
00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:34.483 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.484 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.484 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:34.484 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.484 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.484 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:34.484 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:34.484 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.484 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.743 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.743 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.743 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:34.743 20:22:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:34.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:24:34.743 00:24:34.743 --- 10.0.0.2 ping statistics --- 00:24:34.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.743 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:34.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:24:34.743 00:24:34.743 --- 10.0.0.1 ping statistics --- 00:24:34.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.743 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=143149 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 143149 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 143149 ']' 00:24:34.743 20:23:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.744 20:23:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:34.744 20:23:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.744 20:23:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:34.744 20:23:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.002 [2024-07-15 20:23:00.137987] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:24:35.002 [2024-07-15 20:23:00.138043] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.002 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.002 [2024-07-15 20:23:00.223689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:35.002 [2024-07-15 20:23:00.315147] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
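As in the perf test earlier, fio.sh starts the NVMe-oF target inside the namespace that owns cvl_0_0 and waits for its RPC socket before issuing any rpc.py calls. Stripped of the xtrace wrappers, that amounts to roughly the sketch below; the polling loop stands in for waitforlisten and is an approximation, not the helper's exact logic.

# launch nvmf_tgt in the target namespace: shm id 0, all trace groups enabled, cores 0-3
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# wait until the app answers on its default UNIX-domain RPC socket
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done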
00:24:35.002 [2024-07-15 20:23:00.315190] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.002 [2024-07-15 20:23:00.315202] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.002 [2024-07-15 20:23:00.315210] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.002 [2024-07-15 20:23:00.315218] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:35.002 [2024-07-15 20:23:00.315277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.002 [2024-07-15 20:23:00.315406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:35.002 [2024-07-15 20:23:00.315432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:35.002 [2024-07-15 20:23:00.315434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.940 20:23:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:35.940 20:23:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:24:35.940 20:23:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:35.940 [2024-07-15 20:23:01.230549] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.940 20:23:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:35.940 20:23:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:35.940 20:23:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.199 20:23:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:36.199 Malloc1 00:24:36.458 20:23:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:36.458 20:23:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:36.715 20:23:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:36.715 [2024-07-15 20:23:02.061147] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.974 20:23:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:37.233 20:23:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:37.233 20:23:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:37.233 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:24:37.233 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:37.233 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:37.234 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:37.234 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:37.234 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:37.234 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:37.234 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:37.234 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:37.234 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:37.234 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:37.234 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:37.234 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:37.234 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:37.234 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:37.234 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:37.234 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:37.234 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:37.234 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:37.234 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:37.234 20:23:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:37.506 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:37.506 fio-3.35 00:24:37.506 Starting 1 thread 00:24:37.506 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.077 00:24:40.077 test: (groupid=0, jobs=1): err= 0: pid=143834: Mon Jul 15 20:23:05 2024 00:24:40.077 read: IOPS=8023, BW=31.3MiB/s (32.9MB/s)(62.9MiB/2007msec) 00:24:40.077 slat (usec): min=2, max=241, avg= 2.59, stdev= 2.70 00:24:40.077 clat (usec): min=3206, max=14803, avg=8776.91, stdev=737.90 00:24:40.077 lat (usec): min=3238, max=14805, avg=8779.49, stdev=737.68 00:24:40.077 clat percentiles (usec): 00:24:40.077 | 1.00th=[ 7111], 5.00th=[ 7635], 10.00th=[ 7898], 20.00th=[ 8225], 00:24:40.077 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 8979], 00:24:40.077 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9634], 95.00th=[ 9896], 00:24:40.077 | 99.00th=[10421], 99.50th=[10552], 99.90th=[14091], 99.95th=[14484], 00:24:40.077 | 99.99th=[14746] 00:24:40.077 bw ( KiB/s): min=31280, 
max=32456, per=99.89%, avg=32060.00, stdev=543.94, samples=4 00:24:40.077 iops : min= 7820, max= 8114, avg=8015.00, stdev=135.99, samples=4 00:24:40.077 write: IOPS=7995, BW=31.2MiB/s (32.7MB/s)(62.7MiB/2007msec); 0 zone resets 00:24:40.077 slat (usec): min=2, max=231, avg= 2.68, stdev= 2.00 00:24:40.077 clat (usec): min=2467, max=13409, avg=7147.67, stdev=583.28 00:24:40.077 lat (usec): min=2483, max=13411, avg=7150.35, stdev=583.08 00:24:40.077 clat percentiles (usec): 00:24:40.077 | 1.00th=[ 5800], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6718], 00:24:40.077 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7308], 00:24:40.077 | 70.00th=[ 7439], 80.00th=[ 7570], 90.00th=[ 7832], 95.00th=[ 8029], 00:24:40.077 | 99.00th=[ 8356], 99.50th=[ 8455], 99.90th=[10814], 99.95th=[12649], 00:24:40.077 | 99.99th=[13304] 00:24:40.077 bw ( KiB/s): min=31808, max=32080, per=100.00%, avg=31988.00, stdev=124.88, samples=4 00:24:40.077 iops : min= 7952, max= 8020, avg=7997.00, stdev=31.22, samples=4 00:24:40.077 lat (msec) : 4=0.12%, 10=98.17%, 20=1.71% 00:24:40.077 cpu : usr=72.98%, sys=24.53%, ctx=67, majf=0, minf=6 00:24:40.077 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:40.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:40.077 issued rwts: total=16104,16047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:40.077 00:24:40.077 Run status group 0 (all jobs): 00:24:40.077 READ: bw=31.3MiB/s (32.9MB/s), 31.3MiB/s-31.3MiB/s (32.9MB/s-32.9MB/s), io=62.9MiB (66.0MB), run=2007-2007msec 00:24:40.077 WRITE: bw=31.2MiB/s (32.7MB/s), 31.2MiB/s-31.2MiB/s (32.7MB/s-32.7MB/s), io=62.7MiB (65.7MB), run=2007-2007msec 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # 
awk '{print $3}' 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:40.077 20:23:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:40.335 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:40.336 fio-3.35 00:24:40.336 Starting 1 thread 00:24:40.336 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.869 00:24:42.869 test: (groupid=0, jobs=1): err= 0: pid=144486: Mon Jul 15 20:23:07 2024 00:24:42.870 read: IOPS=8334, BW=130MiB/s (137MB/s)(261MiB/2004msec) 00:24:42.870 slat (usec): min=3, max=125, avg= 4.20, stdev= 1.55 00:24:42.870 clat (usec): min=2265, max=19410, avg=9000.53, stdev=2181.59 00:24:42.870 lat (usec): min=2269, max=19414, avg=9004.74, stdev=2181.66 00:24:42.870 clat percentiles (usec): 00:24:42.870 | 1.00th=[ 4490], 5.00th=[ 5604], 10.00th=[ 6259], 20.00th=[ 7177], 00:24:42.870 | 30.00th=[ 7767], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9503], 00:24:42.870 | 70.00th=[10159], 80.00th=[10814], 90.00th=[11600], 95.00th=[12649], 00:24:42.870 | 99.00th=[14877], 99.50th=[15926], 99.90th=[18482], 99.95th=[18744], 00:24:42.870 | 99.99th=[19268] 00:24:42.870 bw ( KiB/s): min=56864, max=81184, per=51.73%, avg=68992.00, stdev=10742.69, samples=4 00:24:42.870 iops : min= 3554, max= 5074, avg=4312.00, stdev=671.42, samples=4 00:24:42.870 write: IOPS=4914, BW=76.8MiB/s (80.5MB/s)(141MiB/1837msec); 0 zone resets 00:24:42.870 slat (usec): min=45, max=379, avg=47.03, stdev= 6.55 00:24:42.870 clat (usec): min=2493, max=19577, avg=10881.34, stdev=2057.24 00:24:42.870 lat (usec): min=2539, max=19623, avg=10928.37, stdev=2057.51 00:24:42.870 clat percentiles (usec): 00:24:42.870 | 1.00th=[ 6783], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 9241], 00:24:42.870 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10683], 60.00th=[11207], 00:24:42.870 | 70.00th=[11600], 80.00th=[12387], 90.00th=[13566], 95.00th=[14615], 00:24:42.870 | 99.00th=[17171], 99.50th=[18220], 99.90th=[19268], 99.95th=[19530], 00:24:42.870 | 99.99th=[19530] 00:24:42.870 bw ( KiB/s): min=58688, max=84608, per=91.43%, avg=71896.00, stdev=11376.45, samples=4 00:24:42.870 iops : min= 3668, max= 5288, avg=4493.50, stdev=711.03, samples=4 00:24:42.870 lat (msec) : 4=0.29%, 10=56.71%, 20=43.00% 00:24:42.870 cpu : usr=89.32%, sys=9.83%, ctx=10, majf=0, minf=3 00:24:42.870 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:42.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:42.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:42.870 issued rwts: total=16703,9028,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:42.870 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:42.870 00:24:42.870 Run status group 0 (all jobs): 00:24:42.870 READ: bw=130MiB/s (137MB/s), 130MiB/s-130MiB/s (137MB/s-137MB/s), io=261MiB (274MB), run=2004-2004msec 00:24:42.870 WRITE: bw=76.8MiB/s (80.5MB/s), 76.8MiB/s-76.8MiB/s (80.5MB/s-80.5MB/s), io=141MiB (148MB), run=1837-1837msec 00:24:42.870 20:23:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:42.870 20:23:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:42.870 20:23:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:42.870 20:23:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:42.870 20:23:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:42.870 20:23:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:42.870 20:23:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:42.870 20:23:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:42.870 20:23:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:42.870 20:23:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:42.870 20:23:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:42.870 rmmod nvme_tcp 00:24:42.870 rmmod nvme_fabrics 00:24:42.870 rmmod nvme_keyring 00:24:43.128 20:23:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:43.128 20:23:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:43.128 20:23:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:43.128 20:23:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 143149 ']' 00:24:43.128 20:23:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 143149 00:24:43.128 20:23:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 143149 ']' 00:24:43.128 20:23:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 143149 00:24:43.129 20:23:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:24:43.129 20:23:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:43.129 20:23:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 143149 00:24:43.129 20:23:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:43.129 20:23:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:43.129 20:23:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 143149' 00:24:43.129 killing process with pid 143149 00:24:43.129 20:23:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 143149 00:24:43.129 20:23:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 143149 00:24:43.388 20:23:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:43.388 20:23:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:43.388 20:23:08 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:43.388 20:23:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:43.388 20:23:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:43.388 20:23:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.388 20:23:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:43.388 20:23:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.287 20:23:10 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:45.287 00:24:45.287 real 0m16.220s 00:24:45.287 user 1m1.230s 00:24:45.287 sys 0m6.208s 00:24:45.287 20:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:45.287 20:23:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.287 ************************************ 00:24:45.287 END TEST nvmf_fio_host 00:24:45.287 ************************************ 00:24:45.287 20:23:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:45.287 20:23:10 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:45.287 20:23:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:45.287 20:23:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:45.287 20:23:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:45.545 ************************************ 00:24:45.545 START TEST nvmf_failover 00:24:45.545 ************************************ 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:45.545 * Looking for test storage... 
00:24:45.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.545 20:23:10 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:45.546 20:23:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.110 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:52.111 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:52.111 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:52.111 Found net devices under 0000:af:00.0: cvl_0_0 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:52.111 Found net devices under 0000:af:00.1: cvl_0_1 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:52.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:52.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:24:52.111 00:24:52.111 --- 10.0.0.2 ping statistics --- 00:24:52.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.111 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:52.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:52.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:24:52.111 00:24:52.111 --- 10.0.0.1 ping statistics --- 00:24:52.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.111 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=148458 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 148458 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 148458 ']' 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:52.111 20:23:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:52.111 [2024-07-15 20:23:16.532791] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:24:52.111 [2024-07-15 20:23:16.532832] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.111 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.111 [2024-07-15 20:23:16.595828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:52.111 [2024-07-15 20:23:16.684296] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.111 [2024-07-15 20:23:16.684340] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.111 [2024-07-15 20:23:16.684354] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.111 [2024-07-15 20:23:16.684362] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.111 [2024-07-15 20:23:16.684369] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:52.111 [2024-07-15 20:23:16.684478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.111 [2024-07-15 20:23:16.684507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:52.111 [2024-07-15 20:23:16.684510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.111 20:23:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:52.111 20:23:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:52.111 20:23:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:52.111 20:23:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:52.111 20:23:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:52.111 20:23:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.111 20:23:17 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:52.370 [2024-07-15 20:23:17.664501] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.370 20:23:17 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:52.629 Malloc0 00:24:52.888 20:23:17 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:52.888 20:23:18 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:53.147 20:23:18 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:53.406 [2024-07-15 20:23:18.708194] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.406 20:23:18 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:53.665 [2024-07-15 
20:23:18.968950] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:53.665 20:23:19 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:53.924 [2024-07-15 20:23:19.229869] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:53.924 20:23:19 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=149009 00:24:53.924 20:23:19 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:53.924 20:23:19 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:53.924 20:23:19 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 149009 /var/tmp/bdevperf.sock 00:24:53.924 20:23:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 149009 ']' 00:24:53.924 20:23:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:53.924 20:23:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:53.924 20:23:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:53.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:53.924 20:23:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:53.924 20:23:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:54.492 20:23:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:54.492 20:23:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:54.492 20:23:19 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:54.751 NVMe0n1 00:24:54.751 20:23:20 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:55.009 00:24:55.267 20:23:20 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=149182 00:24:55.267 20:23:20 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:55.267 20:23:20 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:56.201 20:23:21 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:56.460 [2024-07-15 20:23:21.598090] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598160] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598167] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598173] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598179] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598184] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598189] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598195] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598200] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598205] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598210] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598215] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598220] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598225] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598230] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598235] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598240] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598245] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598258] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598263] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598268] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598273] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598278] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598283] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598288] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598293] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598299] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598304] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598310] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598315] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598320] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598325] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598330] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598335] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598340] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598346] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598351] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598356] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598361] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598366] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 [2024-07-15 20:23:21.598372] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13724d0 is same with the state(5) to be set 00:24:56.460 20:23:21 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:59.745 20:23:24 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:59.745 00:24:59.745 20:23:24 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:00.003 20:23:25 nvmf_tcp.nvmf_failover -- 
host/failover.sh@50 -- # sleep 3 00:25:03.287 20:23:28 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:03.287 [2024-07-15 20:23:28.455424] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.287 20:23:28 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:04.222 20:23:29 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:04.480 [2024-07-15 20:23:29.726648] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726698] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726708] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726718] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726727] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726735] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726744] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726753] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726761] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726770] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726779] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726787] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726796] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726804] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726813] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726822] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726830] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726839] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726847] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726856] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726864] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726873] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726881] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726897] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726906] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726915] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726923] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726931] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726940] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 [2024-07-15 20:23:29.726948] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1374950 is same with the state(5) to be set 00:25:04.480 20:23:29 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 149182 00:25:11.051 0 00:25:11.051 20:23:35 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 149009 00:25:11.051 20:23:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 149009 ']' 00:25:11.051 20:23:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 149009 00:25:11.051 20:23:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:11.051 20:23:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:11.051 20:23:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 149009 00:25:11.051 20:23:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:11.051 20:23:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:11.051 20:23:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 149009' 00:25:11.051 killing process with pid 149009 00:25:11.051 20:23:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 149009 00:25:11.051 20:23:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 149009 00:25:11.051 20:23:35 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:11.051 [2024-07-15 20:23:19.313216] 
Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:25:11.051 [2024-07-15 20:23:19.313288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149009 ] 00:25:11.051 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.051 [2024-07-15 20:23:19.396402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.051 [2024-07-15 20:23:19.482334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.051 Running I/O for 15 seconds... 00:25:11.051 [2024-07-15 20:23:21.599236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:91224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.051 [2024-07-15 20:23:21.599317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.051 [2024-07-15 20:23:21.599340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.051 [2024-07-15 20:23:21.599362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.051 [2024-07-15 20:23:21.599384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.051 [2024-07-15 20:23:21.599405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.051 [2024-07-15 20:23:21.599427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.051 [2024-07-15 20:23:21.599448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:91232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:115 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.051 [2024-07-15 20:23:21.599876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:91384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.051 [2024-07-15 20:23:21.599885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.599896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:91392 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.052 [2024-07-15 20:23:21.599906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.599918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:91400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.052 [2024-07-15 20:23:21.599927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.599939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.052 [2024-07-15 20:23:21.599948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.599960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:91416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.052 [2024-07-15 20:23:21.599970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.599982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.052 [2024-07-15 20:23:21.599992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:11.052 [2024-07-15 20:23:21.600119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600348] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600559] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.052 [2024-07-15 20:23:21.600790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.052 [2024-07-15 20:23:21.600801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.600810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.600822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.600831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.600842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.600851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.600862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.600871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.600885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.600894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.600906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.600915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.600926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.600936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.600948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.600957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.600969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.600978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 
[2024-07-15 20:23:21.600990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.600999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:31 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.053 [2024-07-15 20:23:21.601698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.053 [2024-07-15 20:23:21.601710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.054 [2024-07-15 20:23:21.601719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:21.601754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.054 [2024-07-15 20:23:21.601764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92192 len:8 PRP1 0x0 PRP2 0x0 00:25:11.054 [2024-07-15 20:23:21.601773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:21.601786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.054 [2024-07-15 20:23:21.601793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.054 [2024-07-15 20:23:21.601802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92200 len:8 PRP1 0x0 PRP2 0x0 00:25:11.054 [2024-07-15 20:23:21.601811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:21.601820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.054 [2024-07-15 20:23:21.601828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.054 [2024-07-15 20:23:21.601835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92208 len:8 PRP1 0x0 PRP2 0x0 00:25:11.054 [2024-07-15 20:23:21.601844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:21.601854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.054 [2024-07-15 20:23:21.601861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.054 [2024-07-15 20:23:21.601869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92216 len:8 PRP1 0x0 PRP2 0x0 00:25:11.054 [2024-07-15 20:23:21.601878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:21.601887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.054 [2024-07-15 20:23:21.601894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.054 [2024-07-15 20:23:21.601902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92224 len:8 PRP1 0x0 PRP2 0x0 00:25:11.054 [2024-07-15 20:23:21.601911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:21.601921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.054 [2024-07-15 20:23:21.601928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.054 [2024-07-15 20:23:21.601935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92232 len:8 PRP1 0x0 PRP2 0x0 00:25:11.054 [2024-07-15 20:23:21.601944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:21.601954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.054 [2024-07-15 20:23:21.601961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.054 [2024-07-15 20:23:21.601969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92240 len:8 PRP1 0x0 PRP2 0x0 00:25:11.054 [2024-07-15 20:23:21.601980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:21.601990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.054 [2024-07-15 20:23:21.601997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.054 [2024-07-15 20:23:21.602008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91432 len:8 PRP1 0x0 PRP2 0x0 00:25:11.054 [2024-07-15 20:23:21.602018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:21.602027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.054 [2024-07-15 20:23:21.602034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.054 [2024-07-15 20:23:21.602042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91440 len:8 PRP1 0x0 PRP2 0x0 00:25:11.054 [2024-07-15 20:23:21.602051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:21.602061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.054 [2024-07-15 20:23:21.602068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.054 [2024-07-15 20:23:21.602076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:91448 len:8 PRP1 0x0 PRP2 0x0 00:25:11.054 [2024-07-15 20:23:21.602085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:21.602095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.054 [2024-07-15 20:23:21.602102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.054 [2024-07-15 20:23:21.602110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91456 len:8 PRP1 0x0 PRP2 0x0 00:25:11.054 [2024-07-15 20:23:21.602119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:21.602129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.054 [2024-07-15 20:23:21.602135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.054 [2024-07-15 20:23:21.602143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91464 len:8 PRP1 0x0 PRP2 0x0 00:25:11.054 [2024-07-15 20:23:21.602152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:21.602162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.054 [2024-07-15 20:23:21.602169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.054 [2024-07-15 20:23:21.602176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91472 len:8 PRP1 0x0 PRP2 0x0 00:25:11.054 [2024-07-15 20:23:21.602185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:21.602198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.054 [2024-07-15 20:23:21.602205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.054 [2024-07-15 20:23:21.602213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91480 len:8 PRP1 0x0 PRP2 0x0 00:25:11.054 [2024-07-15 20:23:21.602222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:21.602276] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e1b1a0 was disconnected and freed. reset controller. 
00:25:11.054 [2024-07-15 20:23:21.602291] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:11.054 [2024-07-15 20:23:21.602316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.054 [2024-07-15 20:23:21.602327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:21.602337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.054 [2024-07-15 20:23:21.602347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:21.602361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.054 [2024-07-15 20:23:21.602371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:21.602381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.054 [2024-07-15 20:23:21.602390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:21.602399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.054 [2024-07-15 20:23:21.602441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e26a30 (9): Bad file descriptor 00:25:11.054 [2024-07-15 20:23:21.606674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.054 [2024-07-15 20:23:21.771130] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:11.054 [2024-07-15 20:23:25.194401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.054 [2024-07-15 20:23:25.194451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:25.194472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.054 [2024-07-15 20:23:25.194483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:25.194497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.054 [2024-07-15 20:23:25.194507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:25.194519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.054 [2024-07-15 20:23:25.194529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:25.194542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.054 [2024-07-15 20:23:25.194551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:25.194563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.054 [2024-07-15 20:23:25.194572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:25.194584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.054 [2024-07-15 20:23:25.194599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:25.194612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.054 [2024-07-15 20:23:25.194621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:25.194633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.054 [2024-07-15 20:23:25.194643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:25.194654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.054 [2024-07-15 20:23:25.194664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:25.194676] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.054 [2024-07-15 20:23:25.194685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.054 [2024-07-15 20:23:25.194697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.054 [2024-07-15 20:23:25.194707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.194718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.194728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.194739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.194749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.194760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.194769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.194781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.194790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.194802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.194811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.194823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.194832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.194845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.194854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.194866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.194877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.194889] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.194898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.194910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.194919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.194930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.194939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.194951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.194960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.194972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.194983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.194994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.195004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.195025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.195046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.195067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.195087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.195108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.195130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.195155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.055 [2024-07-15 20:23:25.195177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.055 [2024-07-15 20:23:25.195199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.055 [2024-07-15 20:23:25.195220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.055 [2024-07-15 20:23:25.195241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.055 [2024-07-15 20:23:25.195268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.055 [2024-07-15 20:23:25.195290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.055 [2024-07-15 20:23:25.195311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:11.055 [2024-07-15 20:23:25.195332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.055 [2024-07-15 20:23:25.195353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.055 [2024-07-15 20:23:25.195374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.055 [2024-07-15 20:23:25.195394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.055 [2024-07-15 20:23:25.195418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.055 [2024-07-15 20:23:25.195438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.055 [2024-07-15 20:23:25.195450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195542] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195753] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.195985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.195996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.196005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.196017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.196026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.196037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.196047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.196058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.196067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.196079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.196088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.196099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.196108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.196121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.196130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.196142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.196151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.196163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.196172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 
[2024-07-15 20:23:25.196184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.196193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.196204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.196216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.196227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.196237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.196248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.196264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.196276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.196285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.196297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.196307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.196318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.196328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.196339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.056 [2024-07-15 20:23:25.196348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.056 [2024-07-15 20:23:25.196360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:14 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76776 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.057 [2024-07-15 20:23:25.196873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.057 [2024-07-15 20:23:25.196917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76800 len:8 PRP1 0x0 PRP2 0x0 00:25:11.057 [2024-07-15 20:23:25.196926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.057 [2024-07-15 20:23:25.196946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.057 [2024-07-15 20:23:25.196954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76808 len:8 PRP1 0x0 PRP2 0x0 00:25:11.057 [2024-07-15 20:23:25.196963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.196973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.057 [2024-07-15 20:23:25.196980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.057 [2024-07-15 20:23:25.196987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76816 len:8 PRP1 0x0 PRP2 0x0 00:25:11.057 [2024-07-15 20:23:25.196997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.197006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.057 [2024-07-15 20:23:25.197013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.057 [2024-07-15 20:23:25.197021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76824 len:8 PRP1 0x0 PRP2 0x0 00:25:11.057 [2024-07-15 20:23:25.197030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.197041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.057 [2024-07-15 20:23:25.197049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.057 [2024-07-15 20:23:25.197059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76832 len:8 PRP1 0x0 PRP2 0x0 00:25:11.057 [2024-07-15 
20:23:25.197068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.197078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.057 [2024-07-15 20:23:25.197085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.057 [2024-07-15 20:23:25.197093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76840 len:8 PRP1 0x0 PRP2 0x0 00:25:11.057 [2024-07-15 20:23:25.197103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.197114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.057 [2024-07-15 20:23:25.197121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.057 [2024-07-15 20:23:25.197129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76848 len:8 PRP1 0x0 PRP2 0x0 00:25:11.057 [2024-07-15 20:23:25.197138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.197148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.057 [2024-07-15 20:23:25.197155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.057 [2024-07-15 20:23:25.197163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76856 len:8 PRP1 0x0 PRP2 0x0 00:25:11.057 [2024-07-15 20:23:25.197172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.197181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.057 [2024-07-15 20:23:25.197191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.057 [2024-07-15 20:23:25.197199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76864 len:8 PRP1 0x0 PRP2 0x0 00:25:11.057 [2024-07-15 20:23:25.197208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.197218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.057 [2024-07-15 20:23:25.197226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.057 [2024-07-15 20:23:25.197234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76872 len:8 PRP1 0x0 PRP2 0x0 00:25:11.057 [2024-07-15 20:23:25.197243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.057 [2024-07-15 20:23:25.197253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.057 [2024-07-15 20:23:25.197265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.057 [2024-07-15 20:23:25.197273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76880 len:8 PRP1 0x0 PRP2 0x0 00:25:11.057 [2024-07-15 20:23:25.197282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:25.197292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.058 [2024-07-15 20:23:25.197299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.058 [2024-07-15 20:23:25.197307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76888 len:8 PRP1 0x0 PRP2 0x0 00:25:11.058 [2024-07-15 20:23:25.197318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:25.197328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.058 [2024-07-15 20:23:25.197335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.058 [2024-07-15 20:23:25.197343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76144 len:8 PRP1 0x0 PRP2 0x0 00:25:11.058 [2024-07-15 20:23:25.197352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:25.197362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.058 [2024-07-15 20:23:25.197369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.058 [2024-07-15 20:23:25.197377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76152 len:8 PRP1 0x0 PRP2 0x0 00:25:11.058 [2024-07-15 20:23:25.197386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:25.197431] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e53730 was disconnected and freed. reset controller. 
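Editor's note on the completions above: every queued WRITE/READ is finished with the status pair "(00/08)" once the submission queue is deleted. The sketch below is purely illustrative (decode_status and the file itself are mine, not part of the test or of SPDK); it only shows how that pair maps, per the NVMe specification, to status code type 0x0 (Generic Command Status) and status code 0x08 ("Command Aborted due to SQ Deletion"), which is the "ABORTED - SQ DELETION" text printed by spdk_nvme_print_completion.

/* Minimal standalone sketch: decode the "(sct/sc)" pair printed above.
 * Assumption: only the two values seen in this log need a name here. */
#include <stdint.h>
#include <stdio.h>

static const char *decode_status(uint8_t sct, uint8_t sc)
{
    if (sct == 0x0 && sc == 0x08) {
        return "ABORTED - SQ DELETION";   /* generic status, aborted due to SQ deletion */
    }
    if (sct == 0x0 && sc == 0x00) {
        return "SUCCESS";
    }
    return "OTHER";
}

int main(void)
{
    /* The log prints the pair in hex as (00/08). */
    printf("(00/08) -> %s\n", decode_status(0x0, 0x08));
    return 0;
}

In other words, none of these aborts indicate data corruption; they are the expected result of tearing down the queue pair before the queued requests could be submitted.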
00:25:11.058 [2024-07-15 20:23:25.197442] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:11.058 [2024-07-15 20:23:25.197466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.058 [2024-07-15 20:23:25.197477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:25.197487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.058 [2024-07-15 20:23:25.197497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:25.197507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.058 [2024-07-15 20:23:25.197516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:25.197526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.058 [2024-07-15 20:23:25.197538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:25.197547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.058 [2024-07-15 20:23:25.201780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.058 [2024-07-15 20:23:25.201814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e26a30 (9): Bad file descriptor 00:25:11.058 [2024-07-15 20:23:25.282380] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
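Editor's note on the failover cycle that just completed: the bdev_nvme layer reports the active path (10.0.0.2:4421) as failed, switches the controller's transport ID to the next registered listener (10.0.0.2:4422), disconnects, and resets the controller, after which "Resetting controller successful" is logged and I/O can resume. The fragment below is a conceptual sketch only, with hypothetical helpers (struct path, try_connect); it does not use SPDK's internal bdev_nvme API (the public host-side entry point for the reset step is spdk_nvme_ctrlr_reset(), driven here by the bdev layer).

/* Illustrative failover sketch: advance to the next registered transport
 * address when the active one fails, mirroring the 4421 -> 4422 switch above. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct path { const char *addr; const char *svcid; };

/* Stub standing in for a transport-level connect attempt. */
static bool try_connect(const struct path *p)
{
    printf("connecting to %s:%s\n", p->addr, p->svcid);
    return true; /* pretend the alternate path is reachable */
}

int main(void)
{
    const struct path paths[] = {
        { "10.0.0.2", "4421" },   /* primary listener, just failed */
        { "10.0.0.2", "4422" },   /* failover target named in the log */
    };
    size_t active = 0;

    /* Path 0 went down: try the remaining registered paths in order. */
    for (size_t next = active + 1; next < sizeof(paths) / sizeof(paths[0]); next++) {
        if (try_connect(&paths[next])) {
            active = next;
            printf("failover complete, active path %s:%s\n",
                   paths[active].addr, paths[active].svcid);
            break;
        }
    }
    return 0;
}

The remainder of this section is the same pattern repeating on the new path: a later SQ deletion (timestamps 20:23:29.*) again drains the queued READ/WRITE commands with the (00/08) abort status before the next reset.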
00:25:11.058 [2024-07-15 20:23:29.729300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729566] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.729981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.729992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:66 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.058 [2024-07-15 20:23:29.730001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.058 [2024-07-15 20:23:29.730013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.059 [2024-07-15 20:23:29.730022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99576 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 
[2024-07-15 20:23:29.730426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.059 [2024-07-15 20:23:29.730766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.059 [2024-07-15 20:23:29.730777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.060 [2024-07-15 20:23:29.730787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.730798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.060 [2024-07-15 20:23:29.730808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.730820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.060 [2024-07-15 20:23:29.730829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.730845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.060 [2024-07-15 20:23:29.730854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.730866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.060 [2024-07-15 20:23:29.730876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.730888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.060 [2024-07-15 20:23:29.730898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.730909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.060 [2024-07-15 20:23:29.730919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.730930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.060 [2024-07-15 20:23:29.730940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.730951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.060 [2024-07-15 20:23:29.730960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.730972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.060 [2024-07-15 20:23:29.730981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.730993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.060 [2024-07-15 20:23:29.731002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.731014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.060 [2024-07-15 20:23:29.731028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.731040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.060 [2024-07-15 20:23:29.731050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.731086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.060 [2024-07-15 20:23:29.731096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99896 len:8 PRP1 0x0 PRP2 0x0 00:25:11.060 [2024-07-15 20:23:29.731106] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.731118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.060 [2024-07-15 20:23:29.731126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.060 [2024-07-15 20:23:29.731133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99904 len:8 PRP1 0x0 PRP2 0x0 00:25:11.060 [2024-07-15 20:23:29.731143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.731152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.060 [2024-07-15 20:23:29.731160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.060 [2024-07-15 20:23:29.731168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99912 len:8 PRP1 0x0 PRP2 0x0 00:25:11.060 [2024-07-15 20:23:29.731177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.731188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.060 [2024-07-15 20:23:29.731195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.060 [2024-07-15 20:23:29.731203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99920 len:8 PRP1 0x0 PRP2 0x0 00:25:11.060 [2024-07-15 20:23:29.731212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.731222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.060 [2024-07-15 20:23:29.731230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.060 [2024-07-15 20:23:29.731237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99928 len:8 PRP1 0x0 PRP2 0x0 00:25:11.060 [2024-07-15 20:23:29.731247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.731261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.060 [2024-07-15 20:23:29.731268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.060 [2024-07-15 20:23:29.731276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99936 len:8 PRP1 0x0 PRP2 0x0 00:25:11.060 [2024-07-15 20:23:29.731285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.731295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.060 [2024-07-15 20:23:29.731302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.060 [2024-07-15 20:23:29.731310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99944 len:8 PRP1 0x0 PRP2 0x0 00:25:11.060 [2024-07-15 20:23:29.731319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.731331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.060 [2024-07-15 20:23:29.731338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.060 [2024-07-15 20:23:29.731346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99952 len:8 PRP1 0x0 PRP2 0x0 00:25:11.060 [2024-07-15 20:23:29.731355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.731364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.060 [2024-07-15 20:23:29.731372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.060 [2024-07-15 20:23:29.731380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99960 len:8 PRP1 0x0 PRP2 0x0 00:25:11.060 [2024-07-15 20:23:29.731389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.731398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.060 [2024-07-15 20:23:29.731405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.060 [2024-07-15 20:23:29.731413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99968 len:8 PRP1 0x0 PRP2 0x0 00:25:11.060 [2024-07-15 20:23:29.731422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.731431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.060 [2024-07-15 20:23:29.731439] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.060 [2024-07-15 20:23:29.731447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99976 len:8 PRP1 0x0 PRP2 0x0 00:25:11.060 [2024-07-15 20:23:29.731460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.731470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.060 [2024-07-15 20:23:29.731478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.060 [2024-07-15 20:23:29.731486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99984 len:8 PRP1 0x0 PRP2 0x0 00:25:11.060 [2024-07-15 20:23:29.731494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.731504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.060 [2024-07-15 20:23:29.731513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.060 [2024-07-15 20:23:29.731521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99992 len:8 PRP1 0x0 PRP2 0x0 00:25:11.060 [2024-07-15 20:23:29.731530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.731540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.060 [2024-07-15 20:23:29.731547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.060 [2024-07-15 20:23:29.731554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100000 len:8 PRP1 0x0 PRP2 0x0 00:25:11.060 [2024-07-15 20:23:29.731564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.731573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.060 [2024-07-15 20:23:29.731580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.060 [2024-07-15 20:23:29.731587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100008 len:8 PRP1 0x0 PRP2 0x0 00:25:11.060 [2024-07-15 20:23:29.731598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.731608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.060 [2024-07-15 20:23:29.731615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.060 [2024-07-15 20:23:29.731624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100016 len:8 PRP1 0x0 PRP2 0x0 00:25:11.060 [2024-07-15 20:23:29.731632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.731642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.060 [2024-07-15 20:23:29.731649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.060 [2024-07-15 20:23:29.731657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100024 len:8 PRP1 0x0 PRP2 0x0 00:25:11.060 [2024-07-15 20:23:29.731665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.060 [2024-07-15 20:23:29.731675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.060 [2024-07-15 20:23:29.731682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.060 [2024-07-15 20:23:29.731690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100032 len:8 PRP1 0x0 PRP2 0x0 00:25:11.060 [2024-07-15 20:23:29.731699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.731708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.731715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.731723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100040 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.731733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 
20:23:29.731743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.731750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.731758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100048 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.731767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.731776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.731784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.731792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100056 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.731801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.731811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.731818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.731825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100064 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.731834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.731844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.731852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.731860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100072 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.731869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.731879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.731885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.731893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100080 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.731902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.731911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.731919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.731926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100088 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.731935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.731944] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.731951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.731959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100096 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.731968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.731977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.731984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.731992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100104 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.732001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.732011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.732018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.732025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100112 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.732034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.732044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.732051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.732059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100120 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.732068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.732077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.732084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.732092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100128 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.732101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.732112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.732119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.732126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100136 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.732135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.732145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.732152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.732160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100144 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.732169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.732178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.732185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.732193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100152 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.732201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.732211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.732218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.732225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100160 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.732234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.732244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.732251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.732262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100168 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.732272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.732281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.732288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.732296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100176 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.732305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.732314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.732322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.732329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100184 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.732338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.732347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 
20:23:29.732354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.732362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100192 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.732373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.732382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.732389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.732397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100200 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.732406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.732416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.732423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.732430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100208 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.732439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.732449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.732456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.732463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100216 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.732472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.732481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.732488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.732496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100224 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.732505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.732514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.061 [2024-07-15 20:23:29.732521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.061 [2024-07-15 20:23:29.732528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100232 len:8 PRP1 0x0 PRP2 0x0 00:25:11.061 [2024-07-15 20:23:29.732538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.061 [2024-07-15 20:23:29.732547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.062 [2024-07-15 20:23:29.732554] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.062 [2024-07-15 20:23:29.732562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100240 len:8 PRP1 0x0 PRP2 0x0 00:25:11.062 [2024-07-15 20:23:29.732571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.062 [2024-07-15 20:23:29.732580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.062 [2024-07-15 20:23:29.732587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.062 [2024-07-15 20:23:29.732594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100248 len:8 PRP1 0x0 PRP2 0x0 00:25:11.062 [2024-07-15 20:23:29.732603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.062 [2024-07-15 20:23:29.732613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.062 [2024-07-15 20:23:29.732620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.062 [2024-07-15 20:23:29.732629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100256 len:8 PRP1 0x0 PRP2 0x0 00:25:11.062 [2024-07-15 20:23:29.732640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.062 [2024-07-15 20:23:29.732649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.062 [2024-07-15 20:23:29.732656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.062 [2024-07-15 20:23:29.732664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99496 len:8 PRP1 0x0 PRP2 0x0 00:25:11.062 [2024-07-15 20:23:29.732673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.062 [2024-07-15 20:23:29.732683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:11.062 [2024-07-15 20:23:29.732689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:11.062 [2024-07-15 20:23:29.732697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99504 len:8 PRP1 0x0 PRP2 0x0 00:25:11.062 [2024-07-15 20:23:29.732706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.062 [2024-07-15 20:23:29.732752] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e56740 was disconnected and freed. reset controller. 
00:25:11.062 [2024-07-15 20:23:29.732763] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:11.062 [2024-07-15 20:23:29.732788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.062 [2024-07-15 20:23:29.732798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.062 [2024-07-15 20:23:29.732809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.062 [2024-07-15 20:23:29.732818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.062 [2024-07-15 20:23:29.732829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.062 [2024-07-15 20:23:29.732837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.062 [2024-07-15 20:23:29.732847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.062 [2024-07-15 20:23:29.732856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.062 [2024-07-15 20:23:29.732866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.062 [2024-07-15 20:23:29.737105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.062 [2024-07-15 20:23:29.737140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e26a30 (9): Bad file descriptor 00:25:11.062 [2024-07-15 20:23:29.817554] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
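The dump above is the failure signature this test expects: when the active path (10.0.0.2:4422) is torn down, every in-flight WRITE on I/O qpair 1 completes with ABORTED - SQ DELETION, queued requests are completed manually, the qpair (0x1e56740) is disconnected and freed, and bdev_nvme fails over to the next registered path (10.0.0.2:4420) and resets the controller. A minimal way to confirm the resets from the captured output, mirroring the check at host/failover.sh@65 just below (illustrative sketch; the try.txt path is the one this log shows the script writing and later removing):
  # count successful controller resets in the saved run log; this test expects 3
  grep -c 'Resetting controller successful' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt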
00:25:11.062
00:25:11.062 Latency(us)
00:25:11.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:11.062 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:11.062 Verification LBA range: start 0x0 length 0x4000
00:25:11.062 NVMe0n1 : 15.00 7548.94 29.49 655.13 0.00 15567.66 610.68 16920.20
00:25:11.062 ===================================================================================================================
00:25:11.062 Total : 7548.94 29.49 655.13 0.00 15567.66 610.68 16920.20
00:25:11.062 Received shutdown signal, test time was about 15.000000 seconds
00:25:11.062
00:25:11.062 Latency(us)
00:25:11.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:11.062 ===================================================================================================================
00:25:11.062 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:11.062 20:23:35 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:11.062 20:23:35 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:11.062 20:23:35 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:11.062 20:23:35 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=151891
00:25:11.062 20:23:35 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:11.062 20:23:35 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 151891 /var/tmp/bdevperf.sock
00:25:11.062 20:23:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 151891 ']'
00:25:11.062 20:23:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:11.062 20:23:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:11.062 20:23:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:11.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
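The bdevperf relaunch above uses -z, which keeps the application idle and waiting for JSON-RPC commands on /var/tmp/bdevperf.sock instead of starting I/O immediately; waitforlisten then polls until that socket answers, and the workload itself is only kicked off later via bdevperf.py perform_tests (host/failover.sh@89 below). A rough manual equivalent, assuming the same workspace layout as this run (sketch, not the test script itself):
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start bdevperf idle, listening for RPCs on a private socket
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  # wait for the RPC socket to come up before configuring bdevs (framework_wait_init used here as one way to block until init completes)
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock framework_wait_init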
00:25:11.062 20:23:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:11.062 20:23:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:11.062 20:23:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:11.062 20:23:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:11.062 20:23:36 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:11.062 [2024-07-15 20:23:36.349448] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:11.062 20:23:36 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:11.321 [2024-07-15 20:23:36.610278] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:11.321 20:23:36 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:11.889 NVMe0n1 00:25:11.889 20:23:37 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:12.148 00:25:12.148 20:23:37 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:12.715 00:25:12.715 20:23:37 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:12.715 20:23:37 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:12.973 20:23:38 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:13.231 20:23:38 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:16.519 20:23:41 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:16.519 20:23:41 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:16.519 20:23:41 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=152943 00:25:16.520 20:23:41 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:16.520 20:23:41 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 152943 00:25:17.896 0 00:25:17.896 20:23:42 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:17.896 [2024-07-15 20:23:35.852433] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
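Lines @76-@80 above give the second run its multipath setup: the target gains listeners on ports 4421 and 4422 in addition to the original 4420 listener, and the same controller name NVMe0 is attached once per path, so bdev_nvme records the extra trids as failover targets. Condensed sketch of that registration (same commands as above; the loop is added only for illustration):
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # target side: additional listeners for the existing subsystem
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # initiator side (bdevperf): attach the same bdev name once per path; extra paths become failover trids
  for port in 4420 4421 4422; do
      $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done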
00:25:17.896 [2024-07-15 20:23:35.852498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151891 ] 00:25:17.896 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.896 [2024-07-15 20:23:35.934582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.896 [2024-07-15 20:23:36.016194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.897 [2024-07-15 20:23:38.416923] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:17.897 [2024-07-15 20:23:38.416976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.897 [2024-07-15 20:23:38.416991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.897 [2024-07-15 20:23:38.417003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.897 [2024-07-15 20:23:38.417013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.897 [2024-07-15 20:23:38.417024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.897 [2024-07-15 20:23:38.417034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.897 [2024-07-15 20:23:38.417044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.897 [2024-07-15 20:23:38.417053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.897 [2024-07-15 20:23:38.417063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.897 [2024-07-15 20:23:38.417092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.897 [2024-07-15 20:23:38.417110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2da30 (9): Bad file descriptor 00:25:17.897 [2024-07-15 20:23:38.464426] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:17.897 Running I/O for 1 seconds... 
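The try.txt excerpt above shows the second run exercising the same mechanism on a single core: once the 10.0.0.2:4420 path is detached (host/failover.sh@84 earlier), the admin qpair's ASYNC EVENT REQUESTs are aborted, bdev_nvme starts a failover to 10.0.0.2:4421, and the controller reset succeeds while verify I/O keeps running. Forcing and checking such a failover by hand would look roughly like this (illustrative; the commands are the ones recorded in this log):
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # drop the currently active path; bdev_nvme should fail over to the next registered trid
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # the controller must still be present on one of the remaining paths
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0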
00:25:17.897
00:25:17.897 Latency(us)
00:25:17.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:17.897 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:17.897 Verification LBA range: start 0x0 length 0x4000
00:25:17.897 NVMe0n1 : 1.01 7524.55 29.39 0.00 0.00 16925.96 1750.11 13166.78
00:25:17.897 ===================================================================================================================
00:25:17.897 Total : 7524.55 29.39 0.00 0.00 16925.96 1750.11 13166.78
00:25:17.897 20:23:42 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:17.897 20:23:42 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:25:18.162 20:23:43 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:18.162 20:23:43 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:25:18.421 20:23:43 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:18.680 20:23:43 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:18.680 20:23:43 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:25:21.967 20:23:46
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:22.539 20:23:47 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:22.539 20:23:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:22.539 20:23:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:22.539 20:23:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:22.539 20:23:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:22.539 20:23:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:22.539 20:23:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:22.539 rmmod nvme_tcp 00:25:22.539 rmmod nvme_fabrics 00:25:22.539 rmmod nvme_keyring 00:25:22.539 20:23:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:22.539 20:23:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:22.539 20:23:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:22.539 20:23:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 148458 ']' 00:25:22.539 20:23:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 148458 00:25:22.539 20:23:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 148458 ']' 00:25:22.539 20:23:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 148458 00:25:22.539 20:23:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:22.539 20:23:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:22.540 20:23:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 148458 00:25:22.540 20:23:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:22.540 20:23:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:22.540 20:23:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 148458' 00:25:22.540 killing process with pid 148458 00:25:22.540 20:23:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 148458 00:25:22.540 20:23:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 148458 00:25:22.842 20:23:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:22.842 20:23:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:22.842 20:23:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:22.842 20:23:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:22.842 20:23:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:22.842 20:23:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.842 20:23:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.842 20:23:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.378 20:23:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:25.378 00:25:25.378 real 0m39.478s 00:25:25.378 user 2m8.446s 00:25:25.378 sys 0m7.617s 00:25:25.378 20:23:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:25.378 20:23:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:25.378 
************************************ 00:25:25.378 END TEST nvmf_failover 00:25:25.378 ************************************ 00:25:25.378 20:23:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:25.378 20:23:50 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:25.378 20:23:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:25.378 20:23:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:25.378 20:23:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:25.378 ************************************ 00:25:25.378 START TEST nvmf_host_discovery 00:25:25.378 ************************************ 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:25.378 * Looking for test storage... 00:25:25.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.378 20:23:50 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:25.379 20:23:50 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:25.379 20:23:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:30.659 20:23:55 
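discovery.sh pins its constants above: the discovery service will listen on port 8009 under the well-known NQN nqn.2014-08.org.nvmexpress.discovery, test subsystems are named under nqn.2016-06.io.spdk:cnode, the host NQN is nqn.2021-12.io.spdk:test, and the host-side SPDK application will use /tmp/host.sock for RPC. Once the target is up, the same discovery log page can be read with stock nvme-cli, e.g. (sketch; the address and port match what this test configures later):
  # query the discovery subsystem the test exposes on the target address
  nvme discover -t tcp -a 10.0.0.2 -s 8009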
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:30.659 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:30.660 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:30.660 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:30.660 20:23:55 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:30.660 Found net devices under 0000:af:00.0: cvl_0_0 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:30.660 Found net devices under 0000:af:00.1: cvl_0_1 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.660 20:23:55 
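The trace above resolves each matched E810 function (vendor 0x8086, device 0x159b) to its kernel interface by globbing the device's sysfs net directory, keeping only interfaces that are up for the TCP test. A minimal stand-alone sketch of that lookup, using the two PCI addresses printed in this run purely as example input:

    #!/usr/bin/env bash
    # Map PCI NIC functions to their kernel net interface names, mirroring the
    # pci_net_devs handling traced above (a sketch, not the nvmf/common.sh source).
    pci_devs=(0000:af:00.0 0000:af:00.1)   # addresses reported in this run
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # Each function exposes its netdev(s) under /sys/bus/pci/devices/<bdf>/net/
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the names, e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done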
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:30.660 20:23:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:30.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:30.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:25:30.660 00:25:30.660 --- 10.0.0.2 ping statistics --- 00:25:30.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.660 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:25:30.660 20:23:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:30.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:25:30.919 00:25:30.919 --- 10.0.0.1 ping statistics --- 00:25:30.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.919 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=157473 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 157473 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 157473 ']' 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:30.919 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.919 [2024-07-15 20:23:56.105787] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:25:30.919 [2024-07-15 20:23:56.105842] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.919 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.919 [2024-07-15 20:23:56.184368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.178 [2024-07-15 20:23:56.274167] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.178 [2024-07-15 20:23:56.274211] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.178 [2024-07-15 20:23:56.274221] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.178 [2024-07-15 20:23:56.274230] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.178 [2024-07-15 20:23:56.274238] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
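Everything up to this point is nvmf_tcp_init building the test topology out of the two E810 ports, with no extra hardware: one port is moved into a private network namespace and plays the NVMe-oF target (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), and the target application is then launched inside that namespace. Collected into one place, with the interface names and addresses taken from this run, the setup is roughly:

    # Sketch of the namespace split used by this test (not the verbatim nvmf/common.sh code).
    TARGET_IF=cvl_0_0        # port handed to the target, inside the namespace
    INITIATOR_IF=cvl_0_1     # port left in the root namespace for the initiator
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                           # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator

    # The NVMe-oF target runs inside the namespace (binary path shortened from the trace):
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &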
00:25:31.178 [2024-07-15 20:23:56.274266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.178 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:31.178 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:31.178 20:23:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:31.178 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:31.178 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.178 20:23:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.178 20:23:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.179 [2024-07-15 20:23:56.410278] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.179 [2024-07-15 20:23:56.418449] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.179 null0 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.179 null1 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=157540 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 157540 /tmp/host.sock 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 157540 ']' 00:25:31.179 20:23:56 
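With the target up and listening on its RPC socket, the script configures it through rpc_cmd, here a thin wrapper over scripts/rpc.py talking to the default /var/tmp/spdk.sock. The traced calls (TCP transport, a discovery listener on port 8009, two null bdevs for later export) correspond roughly to:

    # Target-side setup, expressed as direct rpc.py calls (a sketch of the rpc_cmd
    # invocations traced above; the default RPC socket is assumed).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    ./scripts/rpc.py bdev_null_create null0 1000 512    # name, size in MB, block size
    ./scripts/rpc.py bdev_null_create null1 1000 512
    ./scripts/rpc.py bdev_wait_for_examine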
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:31.179 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:31.179 20:23:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.438 [2024-07-15 20:23:56.531858] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:25:31.438 [2024-07-15 20:23:56.531966] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157540 ] 00:25:31.438 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.438 [2024-07-15 20:23:56.644395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.438 [2024-07-15 20:23:56.733939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.372 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:32.373 20:23:57 
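The host side is a second SPDK application started on its own RPC socket (/tmp/host.sock), which attaches to the target's discovery service; the get_subsystem_names and get_bdev_list helpers seen throughout the rest of the trace are just queries against that socket. A sketch of the host-side flow with the values from this run:

    # Host-side discovery flow (a sketch assembled from the commands traced above).
    HOST_SOCK=/tmp/host.sock
    ./build/bin/nvmf_tgt -m 0x1 -r "$HOST_SOCK" &
    ./scripts/rpc.py -s "$HOST_SOCK" log_set_flag bdev_nvme
    ./scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

    # Helpers the test uses to observe what the host currently sees:
    get_subsystem_names() {
        ./scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        ./scripts/rpc.py -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }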
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.373 [2024-07-15 20:23:57.713971] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.373 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:25:32.632 20:23:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:33.199 [2024-07-15 20:23:58.420359] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:33.199 [2024-07-15 20:23:58.420382] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:33.199 [2024-07-15 20:23:58.420400] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:33.199 [2024-07-15 20:23:58.506698] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:33.457 [2024-07-15 20:23:58.570463] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:25:33.457 [2024-07-15 20:23:58.570486] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:33.715 20:23:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.715 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.716 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:33.716 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:33.716 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:33.716 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:33.716 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:33.716 20:23:59 
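The waitforcondition calls that dominate the rest of the trace are a small poll-until-true helper from autotest_common.sh; the line numbers visible above (local cond, local max=10, (( max-- )), eval, sleep 1) give its shape. A hedged reconstruction, not the verbatim SPDK source:

    # Poll a shell condition roughly once per second, for up to 10 attempts.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1
    }

    # Usage, as in this test:
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'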
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:33.716 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:33.716 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:33.716 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:33.716 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.716 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.716 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:33.716 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:33.716 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:33.716 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:33.974 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.975 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.234 [2024-07-15 20:23:59.370772] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:34.234 [2024-07-15 20:23:59.371821] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:34.234 [2024-07-15 20:23:59.371854] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:34.234 [2024-07-15 20:23:59.497769] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:34.234 20:23:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:34.493 [2024-07-15 20:23:59.799165] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:34.493 [2024-07-15 20:23:59.799186] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:34.493 [2024-07-15 20:23:59.799193] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- 
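The target-side RPC sequence behind everything the host has just observed is spread across host/discovery.sh@86 through @118 in the trace: cnode0 is created, null0 is exposed on port 4420 to the test host NQN, then null1 and a second listener on 4421 are hot-added while the discovery connection stays up. In rpc.py terms, roughly:

    # Subsystem lifecycle exercised above (values taken from this run).
    CN=nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py nvmf_create_subsystem "$CN"
    ./scripts/rpc.py nvmf_subsystem_add_ns "$CN" null0
    ./scripts/rpc.py nvmf_subsystem_add_listener "$CN" -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_host "$CN" nqn.2021-12.io.spdk:test    # allow the host NQN

    # Later, while the host's discovery connection is still live:
    ./scripts/rpc.py nvmf_subsystem_add_ns "$CN" null1                              # host gains nvme0n2
    ./scripts/rpc.py nvmf_subsystem_add_listener "$CN" -t tcp -a 10.0.0.2 -s 4421   # second path

    # The host counts the resulting bdev notifications on its own socket:
    ./scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'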
common/autotest_common.sh@10 -- # set +x 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.428 [2024-07-15 20:24:00.647023] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:35.428 [2024-07-15 20:24:00.647052] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:35.428 [2024-07-15 20:24:00.649956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.428 [2024-07-15 20:24:00.649979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.428 [2024-07-15 20:24:00.649992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.428 [2024-07-15 20:24:00.650003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.428 [2024-07-15 20:24:00.650013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.428 [2024-07-15 20:24:00.650023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.428 [2024-07-15 20:24:00.650034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:35.428 [2024-07-15 20:24:00.650048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.428 [2024-07-15 20:24:00.650057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23454b0 is same with the state(5) to be set 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' 
'"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:35.428 [2024-07-15 20:24:00.659964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23454b0 (9): Bad file descriptor 00:25:35.428 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.428 [2024-07-15 20:24:00.670006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.428 [2024-07-15 20:24:00.670229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.428 [2024-07-15 20:24:00.670260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23454b0 with addr=10.0.0.2, port=4420 00:25:35.428 [2024-07-15 20:24:00.670272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23454b0 is same with the state(5) to be set 00:25:35.428 [2024-07-15 20:24:00.670288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23454b0 (9): Bad file descriptor 00:25:35.428 [2024-07-15 20:24:00.670314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.428 [2024-07-15 20:24:00.670324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.428 [2024-07-15 20:24:00.670334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:35.428 [2024-07-15 20:24:00.670349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.428 [2024-07-15 20:24:00.680071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.428 [2024-07-15 20:24:00.680273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.428 [2024-07-15 20:24:00.680290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23454b0 with addr=10.0.0.2, port=4420 00:25:35.428 [2024-07-15 20:24:00.680301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23454b0 is same with the state(5) to be set 00:25:35.429 [2024-07-15 20:24:00.680315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23454b0 (9): Bad file descriptor 00:25:35.429 [2024-07-15 20:24:00.680329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.429 [2024-07-15 20:24:00.680338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.429 [2024-07-15 20:24:00.680348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:25:35.429 [2024-07-15 20:24:00.680372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.429 [2024-07-15 20:24:00.690131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.429 [2024-07-15 20:24:00.690441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.429 [2024-07-15 20:24:00.690460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23454b0 with addr=10.0.0.2, port=4420 00:25:35.429 [2024-07-15 20:24:00.690470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23454b0 is same with the state(5) to be set 00:25:35.429 [2024-07-15 20:24:00.690485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23454b0 (9): Bad file descriptor 00:25:35.429 [2024-07-15 20:24:00.690519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.429 [2024-07-15 20:24:00.690531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.429 [2024-07-15 20:24:00.690541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:35.429 [2024-07-15 20:24:00.690555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.429 [2024-07-15 20:24:00.700192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.429 [2024-07-15 20:24:00.700450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.429 [2024-07-15 20:24:00.700469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23454b0 with addr=10.0.0.2, port=4420 00:25:35.429 [2024-07-15 20:24:00.700479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23454b0 is same with the state(5) to be set 00:25:35.429 [2024-07-15 20:24:00.700494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23454b0 (9): Bad file descriptor 00:25:35.429 [2024-07-15 20:24:00.700514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.429 [2024-07-15 20:24:00.700523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.429 [2024-07-15 20:24:00.700533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:35.429 [2024-07-15 20:24:00.700547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.429 [2024-07-15 20:24:00.710260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.429 [2024-07-15 20:24:00.710568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.429 [2024-07-15 20:24:00.710585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23454b0 with addr=10.0.0.2, port=4420 00:25:35.429 [2024-07-15 20:24:00.710597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23454b0 is same with the state(5) to be set 00:25:35.429 [2024-07-15 20:24:00.710613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23454b0 (9): Bad file descriptor 00:25:35.429 [2024-07-15 20:24:00.710645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.429 [2024-07-15 20:24:00.710656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.429 [2024-07-15 20:24:00.710665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:35.429 [2024-07-15 20:24:00.710679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
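Editor's note: the expansion above pipes rpc_cmd -s /tmp/host.sock bdev_get_bdevs through jq -r '.[].name', sort and xargs before comparing against "nvme0n1 nvme0n2". A standalone sketch of that lookup, with the test's rpc_cmd wrapper replaced by a direct scripts/rpc.py call (an assumption for illustration; the wrapper itself is not shown in this section):

    get_bdev_list() {
        # Flatten the host-side bdev names into one space-separated line so the
        # caller can compare against a literal such as "nvme0n1 nvme0n2".
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }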
00:25:35.429 [2024-07-15 20:24:00.720324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.429 [2024-07-15 20:24:00.720584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.429 [2024-07-15 20:24:00.720602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23454b0 with addr=10.0.0.2, port=4420 00:25:35.429 [2024-07-15 20:24:00.720612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23454b0 is same with the state(5) to be set 00:25:35.429 [2024-07-15 20:24:00.720627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23454b0 (9): Bad file descriptor 00:25:35.429 [2024-07-15 20:24:00.720640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.429 [2024-07-15 20:24:00.720649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.429 [2024-07-15 20:24:00.720663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:35.429 [2024-07-15 20:24:00.720677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.429 [2024-07-15 20:24:00.730389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:35.429 [2024-07-15 20:24:00.730594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.429 [2024-07-15 20:24:00.730610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23454b0 with addr=10.0.0.2, port=4420 00:25:35.429 [2024-07-15 20:24:00.730621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23454b0 is same with the state(5) to be set 00:25:35.429 [2024-07-15 20:24:00.730635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23454b0 (9): Bad file descriptor 00:25:35.429 [2024-07-15 20:24:00.730648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:35.429 [2024-07-15 20:24:00.730657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:35.429 [2024-07-15 20:24:00.730666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:35.429 [2024-07-15 20:24:00.730680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
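Editor's note: the repeated errno 111 (connection refused) failures against 10.0.0.2:4420 above are expected at this point, since the listener has moved to the second port; the test next verifies that the surviving path reports trsvcid 4421. A hedged standalone equivalent of that check, reusing the RPC and jq filter that appear later in this trace (the relative rpc.py path is shorthand for the repository script):

    # After failover, the only path reported for controller nvme0 should be 4421.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    # expected output in this run: 4421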
00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.429 [2024-07-15 20:24:00.733745] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:35.429 [2024-07-15 20:24:00.733768] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:35.429 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:35.688 
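Editor's note: once bdev_nvme_stop_discovery returns, the script polls until both the controller list and the bdev list on the host socket are empty. A minimal sketch of the same teardown checks, using only commands visible in this trace (rpc.py path assumed as above):

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect no output
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'              # expect no output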
20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.688 20:24:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.688 20:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:35.688 20:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:35.688 20:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:35.688 20:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:35.688 20:24:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:35.688 20:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.688 20:24:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.064 [2024-07-15 20:24:02.099020] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:37.064 [2024-07-15 20:24:02.099045] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:37.064 [2024-07-15 20:24:02.099061] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:37.064 [2024-07-15 20:24:02.230501] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:37.064 [2024-07-15 20:24:02.293702] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:37.064 [2024-07-15 20:24:02.293740] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:37.064 request: 00:25:37.064 { 00:25:37.064 "name": "nvme", 00:25:37.064 "trtype": "tcp", 00:25:37.064 "traddr": "10.0.0.2", 00:25:37.064 "adrfam": "ipv4", 00:25:37.064 "trsvcid": "8009", 00:25:37.064 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:37.064 "wait_for_attach": true, 00:25:37.064 "method": "bdev_nvme_start_discovery", 00:25:37.064 "req_id": 1 00:25:37.064 } 00:25:37.064 Got JSON-RPC error response 00:25:37.064 response: 00:25:37.064 { 00:25:37.064 "code": -17, 00:25:37.064 "message": "File exists" 00:25:37.064 } 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:37.064 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.323 request: 00:25:37.323 { 00:25:37.323 "name": "nvme_second", 00:25:37.323 "trtype": "tcp", 00:25:37.323 "traddr": "10.0.0.2", 00:25:37.323 "adrfam": "ipv4", 00:25:37.323 "trsvcid": "8009", 00:25:37.323 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:37.323 "wait_for_attach": true, 00:25:37.323 "method": "bdev_nvme_start_discovery", 00:25:37.323 "req_id": 1 00:25:37.323 } 00:25:37.323 Got JSON-RPC error response 00:25:37.323 response: 00:25:37.323 { 00:25:37.323 "code": -17, 00:25:37.323 "message": "File exists" 00:25:37.323 } 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.323 20:24:02 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.323 20:24:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.261 [2024-07-15 20:24:03.558157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.261 [2024-07-15 20:24:03.558194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238d640 with addr=10.0.0.2, port=8010 00:25:38.261 [2024-07-15 20:24:03.558212] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:38.261 [2024-07-15 20:24:03.558221] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:38.261 [2024-07-15 20:24:03.558230] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:39.637 [2024-07-15 20:24:04.560576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.637 [2024-07-15 20:24:04.560606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238d640 with addr=10.0.0.2, port=8010 00:25:39.637 [2024-07-15 20:24:04.560621] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:39.637 [2024-07-15 20:24:04.560629] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:39.637 [2024-07-15 20:24:04.560638] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:40.574 [2024-07-15 20:24:05.562712] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:40.574 request: 00:25:40.574 { 00:25:40.574 "name": "nvme_second", 00:25:40.574 "trtype": "tcp", 00:25:40.574 "traddr": "10.0.0.2", 00:25:40.574 "adrfam": "ipv4", 00:25:40.574 "trsvcid": "8010", 00:25:40.574 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:40.574 "wait_for_attach": false, 00:25:40.574 "attach_timeout_ms": 3000, 00:25:40.574 "method": "bdev_nvme_start_discovery", 00:25:40.574 "req_id": 1 00:25:40.574 } 00:25:40.574 Got JSON-RPC error response 00:25:40.574 response: 00:25:40.574 { 00:25:40.574 "code": -110, 
00:25:40.574 "message": "Connection timed out" 00:25:40.574 } 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 157540 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:40.574 rmmod nvme_tcp 00:25:40.574 rmmod nvme_fabrics 00:25:40.574 rmmod nvme_keyring 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 157473 ']' 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 157473 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 157473 ']' 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 157473 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 157473 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:40.574 
20:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 157473' 00:25:40.574 killing process with pid 157473 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 157473 00:25:40.574 20:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 157473 00:25:40.833 20:24:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:40.833 20:24:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:40.833 20:24:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:40.833 20:24:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:40.833 20:24:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:40.833 20:24:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.833 20:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:40.833 20:24:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.757 20:24:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:42.757 00:25:42.757 real 0m17.801s 00:25:42.757 user 0m22.262s 00:25:42.757 sys 0m5.823s 00:25:42.757 20:24:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:42.757 20:24:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.757 ************************************ 00:25:42.757 END TEST nvmf_host_discovery 00:25:42.757 ************************************ 00:25:42.757 20:24:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:42.757 20:24:08 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:42.757 20:24:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:42.757 20:24:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:42.757 20:24:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:42.757 ************************************ 00:25:42.757 START TEST nvmf_host_multipath_status 00:25:42.757 ************************************ 00:25:42.757 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:43.015 * Looking for test storage... 
00:25:43.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.015 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:43.016 20:24:08 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:43.016 20:24:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:48.287 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:48.287 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:48.287 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:48.287 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:48.287 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:48.287 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:48.287 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:48.287 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:48.287 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:48.288 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:48.288 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
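Editor's note: the scan above has matched both E810 functions (0000:af:00.0 and 0000:af:00.1, device 0x8086:0x159b bound to the ice driver); the following lines resolve their kernel net devices through sysfs. A hedged by-hand version of that lookup, with the PCI addresses taken from this log:

    for pci in 0000:af:00.0 0000:af:00.1; do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"   # prints cvl_0_0 / cvl_0_1 on this host
    done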
00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:48.288 Found net devices under 0000:af:00.0: cvl_0_0 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:48.288 Found net devices under 0000:af:00.1: cvl_0_1 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:48.288 20:24:13 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:48.288 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.546 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:48.546 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.546 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:48.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:48.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:25:48.546 00:25:48.546 --- 10.0.0.2 ping statistics --- 00:25:48.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.546 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:25:48.546 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:48.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:25:48.547 00:25:48.547 --- 10.0.0.1 ping statistics --- 00:25:48.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.547 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=163461 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 163461 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 163461 ']' 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:48.547 20:24:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:48.547 [2024-07-15 20:24:13.825976] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
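Editor's note: the address layout used for the rest of the run was established just above: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). A consolidated copy of those commands from this trace, for reference only (run as root; the interface names are specific to this host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                  # target reachable from the root namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # initiator reachable from the namespace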
00:25:48.547 [2024-07-15 20:24:13.826032] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.547 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.805 [2024-07-15 20:24:13.910596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:48.805 [2024-07-15 20:24:13.999322] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.805 [2024-07-15 20:24:13.999370] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.805 [2024-07-15 20:24:13.999380] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:48.805 [2024-07-15 20:24:13.999390] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:48.805 [2024-07-15 20:24:13.999399] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:48.805 [2024-07-15 20:24:13.999454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.805 [2024-07-15 20:24:13.999459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.805 20:24:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:48.805 20:24:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:25:48.805 20:24:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:48.805 20:24:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:48.805 20:24:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:48.805 20:24:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:48.805 20:24:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=163461 00:25:48.805 20:24:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:49.064 [2024-07-15 20:24:14.356374] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.064 20:24:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:49.323 Malloc0 00:25:49.323 20:24:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:49.582 20:24:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:49.841 20:24:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.100 [2024-07-15 20:24:15.277112] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.100 20:24:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:50.100 [2024-07-15 20:24:15.429527] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:50.100 20:24:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:50.100 20:24:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=163746 00:25:50.100 20:24:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:50.100 20:24:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 163746 /var/tmp/bdevperf.sock 00:25:50.100 20:24:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 163746 ']' 00:25:50.100 20:24:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:50.100 20:24:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:50.100 20:24:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:50.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:50.100 20:24:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:50.100 20:24:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:50.359 20:24:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:50.359 20:24:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:25:50.359 20:24:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:50.618 20:24:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:51.185 Nvme0n1 00:25:51.185 20:24:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:51.443 Nvme0n1 00:25:51.443 20:24:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:51.443 20:24:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:53.977 20:24:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:53.977 20:24:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:53.977 20:24:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:53.977 20:24:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:54.913 20:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:54.913 20:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:54.913 20:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.913 20:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:55.171 20:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.171 20:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:55.171 20:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.171 20:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:55.428 20:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:55.428 20:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:55.428 20:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.428 20:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:55.687 20:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.687 20:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:55.687 20:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.687 20:24:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:55.944 20:24:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.944 20:24:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:55.945 20:24:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.945 20:24:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:56.203 20:24:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.203 20:24:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:56.203 20:24:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:56.203 20:24:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.462 20:24:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.462 20:24:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:56.462 20:24:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:56.720 20:24:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:56.979 20:24:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:57.916 20:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:57.916 20:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:57.916 20:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.916 20:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:58.176 20:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:58.176 20:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:58.176 20:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.176 20:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:58.434 20:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.434 20:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:58.434 20:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.434 20:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:58.693 20:24:23 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.693 20:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:58.693 20:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.693 20:24:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:58.952 20:24:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.952 20:24:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:58.952 20:24:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.952 20:24:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:59.211 20:24:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.211 20:24:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:59.211 20:24:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.211 20:24:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:59.470 20:24:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.470 20:24:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:59.470 20:24:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:59.729 20:24:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:59.989 20:24:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:00.999 20:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:00.999 20:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:00.999 20:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.999 20:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:01.258 20:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.258 20:24:26 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:01.258 20:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.258 20:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:01.516 20:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:01.516 20:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:01.516 20:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.516 20:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:01.775 20:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.775 20:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:01.775 20:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.775 20:24:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:02.034 20:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.034 20:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:02.034 20:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.034 20:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:02.293 20:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.293 20:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:02.293 20:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.293 20:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:02.553 20:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.554 20:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:02.554 20:24:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:02.813 20:24:27 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:03.071 20:24:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:04.005 20:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:04.005 20:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:04.005 20:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.005 20:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:04.262 20:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.262 20:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:04.262 20:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.262 20:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:04.519 20:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:04.519 20:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:04.519 20:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.519 20:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:04.777 20:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.777 20:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:04.777 20:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.777 20:24:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:04.777 20:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.777 20:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:04.777 20:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.777 20:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:05.035 20:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:26:05.035 20:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:05.035 20:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.035 20:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:05.293 20:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.293 20:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:05.293 20:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:05.551 20:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:05.807 20:24:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:06.741 20:24:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:06.741 20:24:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:06.741 20:24:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.741 20:24:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:06.999 20:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:06.999 20:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:06.999 20:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.999 20:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:07.256 20:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.256 20:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:07.256 20:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.256 20:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:07.514 20:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.514 20:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:26:07.514 20:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.514 20:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:07.771 20:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.771 20:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:07.771 20:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.771 20:24:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:08.029 20:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:08.029 20:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:08.029 20:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.029 20:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:08.286 20:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:08.286 20:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:08.286 20:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:08.543 20:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:08.799 20:24:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:09.732 20:24:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:09.732 20:24:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:09.732 20:24:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.732 20:24:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:09.991 20:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:09.991 20:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:09.991 20:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.991 20:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:10.248 20:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.249 20:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:10.249 20:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:10.249 20:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.506 20:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.506 20:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:10.506 20:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.506 20:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:10.764 20:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.764 20:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:10.764 20:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.764 20:24:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:11.022 20:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:11.022 20:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:11.022 20:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.022 20:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:11.280 20:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.280 20:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:11.537 20:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:11.538 20:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:26:11.795 20:24:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:12.052 20:24:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:12.986 20:24:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:12.986 20:24:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:12.986 20:24:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.986 20:24:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:13.245 20:24:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.245 20:24:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:13.245 20:24:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.245 20:24:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:13.503 20:24:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.504 20:24:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:13.504 20:24:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.504 20:24:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:13.764 20:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.764 20:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:13.764 20:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.764 20:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:14.024 20:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.024 20:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:14.024 20:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.024 20:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:14.282 20:24:39 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.282 20:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:14.282 20:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.282 20:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:14.540 20:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.540 20:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:14.540 20:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:14.799 20:24:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:15.057 20:24:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:15.994 20:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:15.994 20:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:15.994 20:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.994 20:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:16.254 20:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:16.254 20:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:16.254 20:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.254 20:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:16.513 20:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.513 20:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:16.513 20:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.513 20:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:16.772 20:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.772 20:24:41 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:16.772 20:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.772 20:24:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:17.032 20:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.032 20:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:17.032 20:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.032 20:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:17.291 20:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.291 20:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:17.291 20:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.291 20:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:17.550 20:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.550 20:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:17.550 20:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:17.809 20:24:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:17.809 20:24:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:19.186 20:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:19.186 20:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:19.186 20:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.186 20:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:19.186 20:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.186 20:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:19.186 20:24:44 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.186 20:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:19.443 20:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.443 20:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:19.443 20:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.443 20:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:19.701 20:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.701 20:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:19.701 20:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.701 20:24:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:19.959 20:24:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.960 20:24:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:19.960 20:24:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.960 20:24:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:20.218 20:24:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.218 20:24:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:20.218 20:24:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.218 20:24:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:20.477 20:24:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.477 20:24:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:20.477 20:24:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:20.735 20:24:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:20.993 20:24:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:21.926 20:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:21.926 20:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:21.926 20:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.926 20:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:22.183 20:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.183 20:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:22.183 20:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.183 20:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:22.183 20:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:22.183 20:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:22.183 20:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.184 20:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:22.441 20:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.441 20:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:22.441 20:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.441 20:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:22.699 20:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.699 20:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:22.699 20:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.699 20:24:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:22.958 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.958 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:22.958 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.958 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:23.218 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:23.218 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 163746 00:26:23.218 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 163746 ']' 00:26:23.218 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 163746 00:26:23.218 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:23.218 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:23.218 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 163746 00:26:23.218 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:23.218 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:23.218 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 163746' 00:26:23.218 killing process with pid 163746 00:26:23.218 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 163746 00:26:23.218 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 163746 00:26:23.218 Connection closed with partial response: 00:26:23.218 00:26:23.218 00:26:23.218 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 163746 00:26:23.218 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:23.493 [2024-07-15 20:24:15.487967] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:26:23.493 [2024-07-15 20:24:15.488013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163746 ] 00:26:23.493 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.493 [2024-07-15 20:24:15.533169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.493 [2024-07-15 20:24:15.602572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:23.493 Running I/O for 90 seconds... 
00:26:23.493 [2024-07-15 20:24:30.727712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.493 [2024-07-15 20:24:30.727748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:23.493 [2024-07-15 20:24:30.727779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.493 [2024-07-15 20:24:30.727788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:23.493 [2024-07-15 20:24:30.727800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.493 [2024-07-15 20:24:30.727807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:23.493 [2024-07-15 20:24:30.727818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.493 [2024-07-15 20:24:30.727825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:23.493 [2024-07-15 20:24:30.727836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.493 [2024-07-15 20:24:30.727841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:23.493 [2024-07-15 20:24:30.727852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.493 [2024-07-15 20:24:30.727858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:23.493 [2024-07-15 20:24:30.727869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.493 [2024-07-15 20:24:30.727875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:23.493 [2024-07-15 20:24:30.727886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.493 [2024-07-15 20:24:30.727892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:23.493 [2024-07-15 20:24:30.727964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.493 [2024-07-15 20:24:30.727972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:23.493 [2024-07-15 20:24:30.727984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.493 [2024-07-15 20:24:30.727990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:23.493 [2024-07-15 20:24:30.728001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.493 [2024-07-15 20:24:30.728013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:23.493 [2024-07-15 20:24:30.728025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.493 [2024-07-15 20:24:30.728031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:23.493 [2024-07-15 20:24:30.728043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.493 [2024-07-15 20:24:30.728048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:23.493 [2024-07-15 20:24:30.728060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.493 [2024-07-15 20:24:30.728066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:23.493 [2024-07-15 20:24:30.728077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.493 [2024-07-15 20:24:30.728083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.493 [2024-07-15 20:24:30.728094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.493 [2024-07-15 20:24:30.728101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.493 [2024-07-15 20:24:30.729965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.493 [2024-07-15 20:24:30.729975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:23.493 [2024-07-15 20:24:30.729988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.493 [2024-07-15 20:24:30.729994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:23.493 [2024-07-15 20:24:30.730006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.493 [2024-07-15 20:24:30.730013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:23.493 [2024-07-15 20:24:30.730025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.493 [2024-07-15 20:24:30.730030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.494 [2024-07-15 20:24:30.730049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:107544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730445] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:23.494 
[2024-07-15 20:24:30.730639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:23.494 [2024-07-15 20:24:30.730833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.494 [2024-07-15 20:24:30.730840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.730854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.730860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.730873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.730879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.730893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.730898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.730912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.730918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.730931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.730937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.730950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.730956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.730969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.730975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.730989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.730994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731013] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:107840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107864 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:107872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:107880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:107888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:107904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:107928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:107960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:107976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-07-15 20:24:30.731655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.495 [2024-07-15 20:24:30.731676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.495 [2024-07-15 20:24:30.731698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.495 [2024-07-15 20:24:30.731921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.495 [2024-07-15 20:24:30.731945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0014 p:0 m:0 
dnr:0 00:26:23.495 [2024-07-15 20:24:30.731962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.495 [2024-07-15 20:24:30.731968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.731985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.495 [2024-07-15 20:24:30.731991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.732007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.495 [2024-07-15 20:24:30.732013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:23.495 [2024-07-15 20:24:30.732030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:30.732037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:30.732053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:30.732060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:30.732076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:30.732082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:30.732099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:30.732105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:30.732122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:30.732128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:30.732145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:30.732151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:30.732167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:30.732173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:30.732192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:30.732198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:30.732214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:30.732220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:30.732237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:30.732243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:30.732263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:108352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:30.732269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:30.732286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:30.732292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:30.732342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:108368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:30.732349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:30.732369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.496 [2024-07-15 20:24:30.732375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:30.732393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:108000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.496 [2024-07-15 20:24:30.732399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:30.732417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:108008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.496 [2024-07-15 20:24:30.732423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:30.732442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:108016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.496 [2024-07-15 20:24:30.732448] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:30.732466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:108024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.496 [2024-07-15 20:24:30.732472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:30.732489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.496 [2024-07-15 20:24:30.732495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:30.732517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:108040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.496 [2024-07-15 20:24:30.732523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.129249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.129298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.129323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.129332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.129346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.129354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.129367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.129376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.129389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.129397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.129411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.129420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.129434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.129443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.129457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.129465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.129480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.129489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.129505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.129513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.129527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.129537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.129551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.129564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.129578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.129586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.129600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.129608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.130040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.130062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.130083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.130094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.130109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:78 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.130120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.130135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.130145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.130160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.130170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.130185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.130195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.130210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.496 [2024-07-15 20:24:46.130220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:23.496 [2024-07-15 20:24:46.130235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.130245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.130266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:50424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.130275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.130290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:50440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.130303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.130318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.497 [2024-07-15 20:24:46.130328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.130343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.497 [2024-07-15 20:24:46.130353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.130368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:49880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.497 [2024-07-15 20:24:46.130377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.130392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.130402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.130417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.130427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.130442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.130452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.130466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.130476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.130491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.130501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.130911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.130928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.130946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.130956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.130971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.130981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.130997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:50584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.131006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
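For reference (not part of the captured console output): every completion above reports the status pair (03/02), i.e. status code type 0x3 (Path Related Status) with status code 0x02 (Asymmetric Access Inaccessible), which is the text spdk_nvme_print_completion renders in these notices. The minimal standalone C sketch below decodes that pair the same way; decode_status is an illustrative helper for this log, not an SPDK API.

#include <stdio.h>

/* Minimal sketch: map the NVMe status pair printed as "(03/02)" in the
 * completions above to its spec-defined meaning. */
static const char *decode_status(unsigned sct, unsigned sc)
{
	if (sct == 0x3 && sc == 0x02)
		return "ASYMMETRIC ACCESS INACCESSIBLE"; /* Path Related Status / ANA inaccessible */
	if (sct == 0x0 && sc == 0x00)
		return "SUCCESS";
	return "OTHER";
}

int main(void)
{
	unsigned sct = 0x03, sc = 0x02; /* the "(03/02)" pair seen in each completion */
	printf("(%02x/%02x) -> %s\n", sct, sc, decode_status(sct, sc));
	return 0;
}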
00:26:23.497 [2024-07-15 20:24:46.131024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:50600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.131034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.131049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.131059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.131074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:50632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.131083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.131098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.131108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.131122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:50664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.131132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.131147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.131157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.131172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.131182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.131196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.131206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.131221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:50728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.131231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.131246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.131261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.131277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.497 [2024-07-15 20:24:46.131287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.131302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.497 [2024-07-15 20:24:46.131312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.131329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:49952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.497 [2024-07-15 20:24:46.131340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.131355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:49984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.497 [2024-07-15 20:24:46.131365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.131380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.131390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.131406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.131416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.131922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.497 [2024-07-15 20:24:46.131941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.131961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.497 [2024-07-15 20:24:46.131971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.131987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.497 [2024-07-15 20:24:46.131997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:23.497 [2024-07-15 20:24:46.132012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.132023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.132048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.498 [2024-07-15 20:24:46.132073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:50824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.498 [2024-07-15 20:24:46.132099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:50840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.498 [2024-07-15 20:24:46.132124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:50856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.498 [2024-07-15 20:24:46.132152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:50032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.132177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:50064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.132202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:50096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.132227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.132253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:23.498 [2024-07-15 20:24:46.132284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.132310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:50224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.132336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.132361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.132536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.498 [2024-07-15 20:24:46.132564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.498 [2024-07-15 20:24:46.132592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.498 [2024-07-15 20:24:46.132620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.498 [2024-07-15 20:24:46.132805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.498 [2024-07-15 20:24:46.132832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.498 [2024-07-15 20:24:46.132857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.498 [2024-07-15 20:24:46.132881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:50288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.132907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:50320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.132931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:50352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.132956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:50384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.132981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.132995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:50416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.133006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.133020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.498 [2024-07-15 20:24:46.133031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.133046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.498 [2024-07-15 20:24:46.133056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.133070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.498 [2024-07-15 20:24:46.133080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.133098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.498 [2024-07-15 20:24:46.133108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.133664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.498 [2024-07-15 20:24:46.133684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.133704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:49848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.133715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.133730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:50456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.498 [2024-07-15 20:24:46.133740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.133755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:50488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.498 [2024-07-15 20:24:46.133766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.133780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.498 [2024-07-15 20:24:46.133790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.133805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:50464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.133815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.133830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.133840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.133855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.133865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.133880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:50560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.133890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 
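Every completion in this stretch of the log reports the same NVMe status pair, printed by spdk_nvme_print_completion as ASYMMETRIC ACCESS INACCESSIBLE (03/02): status code type 0x3 (Path Related Status) with status code 0x2 (Asymmetric Access Inaccessible), i.e. the ANA state this test drives while I/O keeps being submitted. As a hedged illustration only (plain C against the NVMe base specification's completion status-field layout, not SPDK's own helpers or headers), decoding that pair from a raw 16-bit status word could look like the sketch below; the raw value 0x0604 is an assumed example chosen to match sct=0x3, sc=0x2, p:0, dnr:0 as seen above.

/* Illustrative only: decode the "(03/02)" pair shown in the log.
 * NVMe completion status word layout (base spec): bit 0 = phase tag (P),
 * bits 8:1 = status code (SC), bits 11:9 = status code type (SCT),
 * bit 15 = do not retry (DNR). */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t status = 0x0604;            /* assumed example: SCT=0x3, SC=0x2 */

    uint8_t p   = status & 0x1;          /* phase tag                        */
    uint8_t sc  = (status >> 1) & 0xff;  /* status code                      */
    uint8_t sct = (status >> 9) & 0x7;   /* status code type                 */
    uint8_t dnr = (status >> 15) & 0x1;  /* do not retry                     */

    printf("sct=0x%02x sc=0x%02x p=%u dnr=%u\n", sct, sc, p, dnr);

    if (sct == 0x3 && sc == 0x2) {
        /* ANA Inaccessible: the namespace is unreachable on this path;
         * a multipath-aware host would retry on another path rather than
         * fail the I/O outright. */
        printf("ANA state: inaccessible on this path\n");
    }
    return 0;
}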
00:26:23.498 [2024-07-15 20:24:46.133905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.498 [2024-07-15 20:24:46.133915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:23.498 [2024-07-15 20:24:46.133930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:50624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.499 [2024-07-15 20:24:46.133941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.133959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:50552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.499 [2024-07-15 20:24:46.133970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.133984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.499 [2024-07-15 20:24:46.133994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.134009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.499 [2024-07-15 20:24:46.134019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.134034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.499 [2024-07-15 20:24:46.134045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.134835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.499 [2024-07-15 20:24:46.134855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.134873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.499 [2024-07-15 20:24:46.134882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.134897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.499 [2024-07-15 20:24:46.134906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.134921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.499 [2024-07-15 20:24:46.134930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.134946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.499 [2024-07-15 20:24:46.134955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.134970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.499 [2024-07-15 20:24:46.134980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.134995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.499 [2024-07-15 20:24:46.135004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:50704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.499 [2024-07-15 20:24:46.135029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:50736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.499 [2024-07-15 20:24:46.135057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:50768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.499 [2024-07-15 20:24:46.135081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:49896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.499 [2024-07-15 20:24:46.135106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.499 [2024-07-15 20:24:46.135130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.499 [2024-07-15 20:24:46.135155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.499 [2024-07-15 20:24:46.135179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.499 [2024-07-15 20:24:46.135204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:50096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.499 [2024-07-15 20:24:46.135228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.499 [2024-07-15 20:24:46.135253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:50224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.499 [2024-07-15 20:24:46.135283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.499 [2024-07-15 20:24:46.135307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.499 [2024-07-15 20:24:46.135332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.499 [2024-07-15 20:24:46.135732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.499 [2024-07-15 20:24:46.135759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:50320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.499 [2024-07-15 20:24:46.135785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:50384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:23.499 [2024-07-15 20:24:46.135809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.499 [2024-07-15 20:24:46.135834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.499 [2024-07-15 20:24:46.135859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:50864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.499 [2024-07-15 20:24:46.135883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:50880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.499 [2024-07-15 20:24:46.135909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:50800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.499 [2024-07-15 20:24:46.135933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.499 [2024-07-15 20:24:46.135958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.499 [2024-07-15 20:24:46.135983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.135998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.499 [2024-07-15 20:24:46.136008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.136023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:49848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.499 [2024-07-15 20:24:46.136033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.136050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 
nsid:1 lba:50488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.499 [2024-07-15 20:24:46.136060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.499 [2024-07-15 20:24:46.136075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:50464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.499 [2024-07-15 20:24:46.136085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.136100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:50528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.136110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.136125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:50592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.136135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.136304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.500 [2024-07-15 20:24:46.136317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.500 [2024-07-15 20:24:46.137046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.500 [2024-07-15 20:24:46.137073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.500 [2024-07-15 20:24:46.137097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.137122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.137147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:50296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.137172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:50360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.137196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.500 [2024-07-15 20:24:46.137224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.137249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.500 [2024-07-15 20:24:46.137279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:50704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.137304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:50768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.137328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.137352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.500 [2024-07-15 20:24:46.137376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.137401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:26:23.500 [2024-07-15 20:24:46.137416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:50224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.137426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.500 [2024-07-15 20:24:46.137451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.500 [2024-07-15 20:24:46.137635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:50384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.137662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.500 [2024-07-15 20:24:46.137690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.500 [2024-07-15 20:24:46.137716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.137740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.137765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:50488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.500 [2024-07-15 20:24:46.137790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.137815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.500 [2024-07-15 20:24:46.137927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:50944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.500 [2024-07-15 20:24:46.137954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:50960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.500 [2024-07-15 20:24:46.137979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.137994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:50976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.500 [2024-07-15 20:24:46.138003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.138018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.500 [2024-07-15 20:24:46.138028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.138043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.500 [2024-07-15 20:24:46.138053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.138068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.500 [2024-07-15 20:24:46.138081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.138096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:50424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.138106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.138121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:50504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.138131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.138955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.138973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.138990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:50600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.139000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.139015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:50664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.500 [2024-07-15 20:24:46.139025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:23.500 [2024-07-15 20:24:46.139040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.500 [2024-07-15 20:24:46.139049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.139065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.501 [2024-07-15 20:24:46.139074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.139090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.501 [2024-07-15 20:24:46.139099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.139114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.501 [2024-07-15 20:24:46.139123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.139138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.501 [2024-07-15 20:24:46.139147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.139163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:50768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.501 [2024-07-15 20:24:46.139173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.139188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:50840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.501 [2024-07-15 20:24:46.139197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.139218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:50224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:23.501 [2024-07-15 20:24:46.139228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.139243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.501 [2024-07-15 20:24:46.139252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.139272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:50880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.501 [2024-07-15 20:24:46.139282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.139297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.501 [2024-07-15 20:24:46.139306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.139321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.501 [2024-07-15 20:24:46.139330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.139719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:51048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.501 [2024-07-15 20:24:46.139735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.139753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.501 [2024-07-15 20:24:46.139763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.139778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.501 [2024-07-15 20:24:46.139788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.139803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.501 [2024-07-15 20:24:46.139813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.139828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.501 [2024-07-15 20:24:46.139838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.139853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 
nsid:1 lba:51128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.501 [2024-07-15 20:24:46.139863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.139878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.501 [2024-07-15 20:24:46.139888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.139906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:50728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.501 [2024-07-15 20:24:46.139915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.139931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.501 [2024-07-15 20:24:46.139941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.140263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.501 [2024-07-15 20:24:46.140275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.140292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.501 [2024-07-15 20:24:46.140302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.140318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.501 [2024-07-15 20:24:46.140327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.140342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:50424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.501 [2024-07-15 20:24:46.140352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.140367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:50824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.501 [2024-07-15 20:24:46.140377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.140392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:50104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.501 [2024-07-15 20:24:46.140402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:23.501 [2024-07-15 20:24:46.140417] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.501 [2024-07-15 20:24:46.140428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command READ/WRITE notices (sqid:1, nsid:1, len:8) and matching spdk_nvme_print_completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion notices, 00:26:23.501-00:26:23.507 / 2024-07-15 20:24:46.140-20:24:46.155 ...]
00:26:23.507 [2024-07-15 20:24:46.155249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.507 [2024-07-15 20:24:46.155263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE
(03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:23.507 [2024-07-15 20:24:46.155279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.507 [2024-07-15 20:24:46.155289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.507 [2024-07-15 20:24:46.155486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.507 [2024-07-15 20:24:46.155496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:23.507 [2024-07-15 20:24:46.155512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.507 [2024-07-15 20:24:46.155522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:23.507 [2024-07-15 20:24:46.155541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.507 [2024-07-15 20:24:46.155550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:23.507 [2024-07-15 20:24:46.155566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.507 [2024-07-15 20:24:46.155575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:23.507 [2024-07-15 20:24:46.155590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.507 [2024-07-15 20:24:46.155600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:23.507 [2024-07-15 20:24:46.155615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.507 [2024-07-15 20:24:46.155625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:23.507 [2024-07-15 20:24:46.155640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.507 [2024-07-15 20:24:46.155650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:23.507 [2024-07-15 20:24:46.155737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.507 [2024-07-15 20:24:46.155749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:23.507 [2024-07-15 20:24:46.155765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.507 [2024-07-15 20:24:46.155776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:23.507 [2024-07-15 20:24:46.155790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.507 [2024-07-15 20:24:46.155801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:23.507 [2024-07-15 20:24:46.155816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.507 [2024-07-15 20:24:46.155826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.156006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.508 [2024-07-15 20:24:46.156018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.156035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.508 [2024-07-15 20:24:46.156044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.156059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.508 [2024-07-15 20:24:46.156069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.156124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.508 [2024-07-15 20:24:46.156138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.156154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.156164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.156272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.508 [2024-07-15 20:24:46.156284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.156393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.156403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.157527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:23.508 [2024-07-15 20:24:46.157544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.157562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.157571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.157586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.157596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.157611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.157621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.157637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.157647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.157663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.157672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.157687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.157697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.158020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.158050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.158075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.158100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.508 [2024-07-15 20:24:46.158125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.508 [2024-07-15 20:24:46.158149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.508 [2024-07-15 20:24:46.158175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.508 [2024-07-15 20:24:46.158286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.158313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.158338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.158362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.508 [2024-07-15 20:24:46.158467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:51712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.508 [2024-07-15 20:24:46.158493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.508 [2024-07-15 20:24:46.158518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.508 [2024-07-15 20:24:46.158546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.158623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.158649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:51664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.508 [2024-07-15 20:24:46.158674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.158751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.158778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.508 [2024-07-15 20:24:46.158876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.508 [2024-07-15 20:24:46.158903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.158992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.159004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:26:23.508 [2024-07-15 20:24:46.159080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.159092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.159834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.508 [2024-07-15 20:24:46.159850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.160803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.160818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.160838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.508 [2024-07-15 20:24:46.160849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:23.508 [2024-07-15 20:24:46.160863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.160873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.160888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.160897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.160912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.160922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.160937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.160946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.160961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.160971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.160986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.160995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.161010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.161019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.161035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.161044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.161059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.509 [2024-07-15 20:24:46.161069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.161084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.509 [2024-07-15 20:24:46.161093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.161109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.509 [2024-07-15 20:24:46.161118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.161133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:51792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.509 [2024-07-15 20:24:46.161145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.161302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.509 [2024-07-15 20:24:46.161316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.161333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.509 [2024-07-15 20:24:46.161343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.161358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:51768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.509 [2024-07-15 20:24:46.161368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.161384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.509 [2024-07-15 20:24:46.161394] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.162715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.162731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.162748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.162757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.162772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.162781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.162796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.162806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.162821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.162830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.162845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.509 [2024-07-15 20:24:46.162855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.162870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.509 [2024-07-15 20:24:46.162879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.162894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.509 [2024-07-15 20:24:46.162907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.162923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:51816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.509 [2024-07-15 20:24:46.162932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.163247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:23.509 [2024-07-15 20:24:46.163267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.163284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.163294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.163309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.163319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.163334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.163344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.163359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.163369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.163383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.163393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.163408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.163417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.163433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.163442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.163624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.163638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.163654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.163664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.163679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 
lba:51832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.509 [2024-07-15 20:24:46.163689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.163707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.509 [2024-07-15 20:24:46.163717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.163732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.509 [2024-07-15 20:24:46.163742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.163756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.509 [2024-07-15 20:24:46.163766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.163781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.163791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.163882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.509 [2024-07-15 20:24:46.163894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.163910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.509 [2024-07-15 20:24:46.163920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:23.509 [2024-07-15 20:24:46.163935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.163944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.164074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.164085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.164102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.164111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.164126] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.164136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.164151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.510 [2024-07-15 20:24:46.164160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.164176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.510 [2024-07-15 20:24:46.164185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.164203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.510 [2024-07-15 20:24:46.164213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.164229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.510 [2024-07-15 20:24:46.164238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.164352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.164364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.164380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.510 [2024-07-15 20:24:46.164391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.164406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.164416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.164431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.164441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.165292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.165308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 
00:26:23.510 [2024-07-15 20:24:46.165325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.165334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.165348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.510 [2024-07-15 20:24:46.165358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.165374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.510 [2024-07-15 20:24:46.165383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.166646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:52440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.510 [2024-07-15 20:24:46.166660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.166677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:52456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.510 [2024-07-15 20:24:46.166687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.166702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.510 [2024-07-15 20:24:46.166715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.166730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:52488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.510 [2024-07-15 20:24:46.166740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.166755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.510 [2024-07-15 20:24:46.166765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.166780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:52520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.510 [2024-07-15 20:24:46.166789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.166804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:52536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.510 [2024-07-15 20:24:46.166814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.166829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.510 [2024-07-15 20:24:46.166839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.166854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:52568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.510 [2024-07-15 20:24:46.166863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.166878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.510 [2024-07-15 20:24:46.166887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.166902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.166911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.166927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:52016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.166936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.166952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.166961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.166976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:52080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.166985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.167000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:51952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.167012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.167027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.167036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.167051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.167061] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.167231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.510 [2024-07-15 20:24:46.167243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.167264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.167275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.167290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.167300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.167314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.167324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.167339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.167349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.167364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.510 [2024-07-15 20:24:46.167374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.167388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.510 [2024-07-15 20:24:46.167398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.167413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.167422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:23.510 [2024-07-15 20:24:46.167437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:52128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.510 [2024-07-15 20:24:46.167447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.167461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:52160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:23.511 [2024-07-15 20:24:46.167471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.167489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.511 [2024-07-15 20:24:46.167499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.168177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:52456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.511 [2024-07-15 20:24:46.168192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.168209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:52488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.511 [2024-07-15 20:24:46.168219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.168234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.511 [2024-07-15 20:24:46.168244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.168265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.511 [2024-07-15 20:24:46.168275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.168290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:52584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.511 [2024-07-15 20:24:46.168301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.168316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:52016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.511 [2024-07-15 20:24:46.168326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.168340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.511 [2024-07-15 20:24:46.168350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.168365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.511 [2024-07-15 20:24:46.168375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.168479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 
lba:51976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.511 [2024-07-15 20:24:46.168491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.168507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:52088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.511 [2024-07-15 20:24:46.168518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.168533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.511 [2024-07-15 20:24:46.168543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.168564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.511 [2024-07-15 20:24:46.168574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.168589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:52160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.511 [2024-07-15 20:24:46.168599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.168902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.511 [2024-07-15 20:24:46.168916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.168933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:52648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.511 [2024-07-15 20:24:46.168943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.168958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:52664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.511 [2024-07-15 20:24:46.168968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.168983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.511 [2024-07-15 20:24:46.168993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.169008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.511 [2024-07-15 20:24:46.169018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.169033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.511 [2024-07-15 20:24:46.169043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.169058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:52240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.511 [2024-07-15 20:24:46.169068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.169084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.511 [2024-07-15 20:24:46.169094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.169192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:52152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.511 [2024-07-15 20:24:46.169203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.169220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.511 [2024-07-15 20:24:46.169229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.169244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.511 [2024-07-15 20:24:46.169263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.169279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:52016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.511 [2024-07-15 20:24:46.169288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.169303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:51872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.511 [2024-07-15 20:24:46.169313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.169328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:52088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.511 [2024-07-15 20:24:46.169338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.169353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.511 [2024-07-15 20:24:46.169362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:26:23.511 [2024-07-15 20:24:46.169450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.511 [2024-07-15 20:24:46.169462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.169479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:52248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.511 [2024-07-15 20:24:46.169489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.169504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.511 [2024-07-15 20:24:46.169514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.169529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:52320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.511 [2024-07-15 20:24:46.169539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.169554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.511 [2024-07-15 20:24:46.169564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:23.511 [2024-07-15 20:24:46.169675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.511 [2024-07-15 20:24:46.169686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.169703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:52680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.512 [2024-07-15 20:24:46.169712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.169728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.512 [2024-07-15 20:24:46.169740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.169755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.512 [2024-07-15 20:24:46.169765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.170284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.512 [2024-07-15 20:24:46.170299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.170316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:52016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.512 [2024-07-15 20:24:46.170326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.170339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.512 [2024-07-15 20:24:46.170348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.170363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.512 [2024-07-15 20:24:46.170373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.170388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:52320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.512 [2024-07-15 20:24:46.170397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.170413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.512 [2024-07-15 20:24:46.170422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.170438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.512 [2024-07-15 20:24:46.170447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.171271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:52728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.512 [2024-07-15 20:24:46.171289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.171308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.512 [2024-07-15 20:24:46.171318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.171333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:52760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.512 [2024-07-15 20:24:46.171343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.171359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:52776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.512 [2024-07-15 20:24:46.171368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.171386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.512 [2024-07-15 20:24:46.171395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.171411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:52400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.512 [2024-07-15 20:24:46.171420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.171436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.512 [2024-07-15 20:24:46.171445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.171461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.512 [2024-07-15 20:24:46.171471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.171486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.512 [2024-07-15 20:24:46.171496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.171512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.512 [2024-07-15 20:24:46.171521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.171538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.512 [2024-07-15 20:24:46.171548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.171692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.512 [2024-07-15 20:24:46.171704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.171721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.512 [2024-07-15 20:24:46.171731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.171748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:23.512 [2024-07-15 20:24:46.171757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.171773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.512 [2024-07-15 20:24:46.171781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.171795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:52168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.512 [2024-07-15 20:24:46.171804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.172534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.512 [2024-07-15 20:24:46.172551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.172569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:52744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.512 [2024-07-15 20:24:46.172580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.172596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.512 [2024-07-15 20:24:46.172607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.172622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:52400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.512 [2024-07-15 20:24:46.172632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.172647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.512 [2024-07-15 20:24:46.172658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.172673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.512 [2024-07-15 20:24:46.172683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.172698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:52808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.512 [2024-07-15 20:24:46.172709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.172725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 
lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.512 [2024-07-15 20:24:46.172735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.173629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.512 [2024-07-15 20:24:46.173645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.173663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.512 [2024-07-15 20:24:46.173672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.173687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.512 [2024-07-15 20:24:46.173696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.173711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:52872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.512 [2024-07-15 20:24:46.173720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.173738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.512 [2024-07-15 20:24:46.173747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.173763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:52904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.512 [2024-07-15 20:24:46.173773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.173787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:52920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.512 [2024-07-15 20:24:46.173797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.512 [2024-07-15 20:24:46.173812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.513 [2024-07-15 20:24:46.173822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.173836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:52448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.173846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.173861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.173871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.173886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.173897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.173911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.173921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.173937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.513 [2024-07-15 20:24:46.173946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.173961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:52400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.173970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.173985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.173995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.174010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.174020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.174180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:52576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.174195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.174212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.174222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.174237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.174247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 
dnr:0 00:26:23.513 [2024-07-15 20:24:46.174268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.174279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.175067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.513 [2024-07-15 20:24:46.175083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.175100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.513 [2024-07-15 20:24:46.175110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.175125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:52904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.513 [2024-07-15 20:24:46.175134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.175149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.513 [2024-07-15 20:24:46.175159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.175174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.175183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.175198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.175208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.175223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.175232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.175247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.175262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.175277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.175290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.175305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.175315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.175619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.513 [2024-07-15 20:24:46.175632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.175649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.513 [2024-07-15 20:24:46.175659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.175675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.513 [2024-07-15 20:24:46.175685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.175700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.513 [2024-07-15 20:24:46.175710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.175725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:53016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.513 [2024-07-15 20:24:46.175735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.175750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.513 [2024-07-15 20:24:46.175760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.175774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.513 [2024-07-15 20:24:46.175784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.175800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:52504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.175810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.176145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.176160] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.176176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.176187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.176201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:52872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.513 [2024-07-15 20:24:46.176211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.176229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.513 [2024-07-15 20:24:46.176239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.176261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:52544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.176272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.176296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.176307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.176322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.176331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.176347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.513 [2024-07-15 20:24:46.176357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.176455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.513 [2024-07-15 20:24:46.176467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.176484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.513 [2024-07-15 20:24:46.176494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.176509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:23.513 [2024-07-15 20:24:46.176519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:23.513 [2024-07-15 20:24:46.176535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:52504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.176544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.176612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.176624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.176639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:52456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.176649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.176665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:52584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.176673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.176785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.176797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.176813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.176822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.176837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:52936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:24:46.176847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.176862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.176871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.176887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:52656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.176896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.176971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:53000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:24:46.176982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.176998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:52504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.177008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.177024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:52456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.177034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.177146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:52624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.177158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.177174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.177184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.177236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:52504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.177247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.178055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.178071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.178837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:24:46.178854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.178871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:24:46.178881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.178895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:24:46.178904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.178919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:53104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:24:46.178929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.178944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:24:46.178953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.178969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:24:46.178978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.178993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:24:46.179002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.179018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:24:46.179027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.179042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:24:46.179052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.179066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:24:46.179076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.179091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:52696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.179101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.179116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.179126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.179141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.179154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:26:23.514 [2024-07-15 20:24:46.179169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.179179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.179354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.179367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.179385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.179395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.180546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:52816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.180563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.180579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:52760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.180589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.180604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:24:46.180613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.180627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:53104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:24:46.180637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.180652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:24:46.180662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.180677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:24:46.180687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.180702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:24:46.180712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:55 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.180726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.180737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.180751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:52752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.180762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.180781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.514 [2024-07-15 20:24:46.180790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.180957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:53216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:24:46.180969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:23.514 [2024-07-15 20:24:46.180985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.514 [2024-07-15 20:24:46.180995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.181010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:24:46.181019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.181035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:24:46.181044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.181059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:53280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:24:46.181069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.181084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:24:46.181093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.181108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:24:46.181118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.181133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:24:46.181142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.181512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:53344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:24:46.181526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.181544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:53360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:24:46.181555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.181569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.515 [2024-07-15 20:24:46.181579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.181597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:52848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.515 [2024-07-15 20:24:46.181607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.181622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.515 [2024-07-15 20:24:46.181632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.181647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:52912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.515 [2024-07-15 20:24:46.181657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.181672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.515 [2024-07-15 20:24:46.181682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.181697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:24:46.181706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.181721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:23.515 [2024-07-15 20:24:46.181731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.181745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:52712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.515 [2024-07-15 20:24:46.181755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.181770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.515 [2024-07-15 20:24:46.181780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.182072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:24:46.182083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.182100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:24:46.182110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.182124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:24:46.182134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.182149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:24:46.182159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.182173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.515 [2024-07-15 20:24:46.182186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.182200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.515 [2024-07-15 20:24:46.182210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.182225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.515 [2024-07-15 20:24:46.182235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.182250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 
lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.515 [2024-07-15 20:24:46.182267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.182368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:24:46.182380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.182397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:52848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.515 [2024-07-15 20:24:46.182407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.182422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.515 [2024-07-15 20:24:46.182432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.182447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:24:46.182456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.182472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.515 [2024-07-15 20:24:46.182481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.183252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:53264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:24:46.183274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.183291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:24:46.183301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.183315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.515 [2024-07-15 20:24:46.183325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.183340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.515 [2024-07-15 20:24:46.183355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.183370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.515 [2024-07-15 20:24:46.183380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.183395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:24:46.183404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:23.515 [2024-07-15 20:24:46.184273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.515 [2024-07-15 20:24:46.184288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.184305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.184315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.184329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.184339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.184354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:53424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.184363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.184379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.184388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.184403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.184413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.184428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.184438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.184452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:53488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.184462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
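The command/completion pairs around this point are SPDK's nvme_qpair tracing: each READ/WRITE issued on qid:1 is printed together with a completion carrying the ANA status ASYMMETRIC ACCESS INACCESSIBLE (03/02), apparently while the multipath test drives this path through the inaccessible ANA state, so the notices themselves are expected noise rather than failures. If the volume of these records needs to be summarized after a run, a minimal sketch along these lines should do; the file name build.log is an assumption (a saved copy of this console output), and the grep patterns are lifted from the records shown here:

  # count completions that carry the ANA "inaccessible" status (03/02)
  grep -c 'spdk_nvme_print_completion: .*ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log
  # split the retried I/O on qid:1 by opcode
  grep -c 'nvme_io_qpair_print_command: \*NOTICE\*: READ sqid:1' build.log
  grep -c 'nvme_io_qpair_print_command: \*NOTICE\*: WRITE sqid:1' build.log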
00:26:23.516 [2024-07-15 20:24:46.184477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:53504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.184487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.184502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:52976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.516 [2024-07-15 20:24:46.184512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.184530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.516 [2024-07-15 20:24:46.184540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.184555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.516 [2024-07-15 20:24:46.184565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.184579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.516 [2024-07-15 20:24:46.184589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.184604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.184613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.184628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.516 [2024-07-15 20:24:46.184638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.184653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.184662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.184826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.516 [2024-07-15 20:24:46.184839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.184856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.516 [2024-07-15 20:24:46.184865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.184880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.516 [2024-07-15 20:24:46.184890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.184905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.516 [2024-07-15 20:24:46.184914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.185701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.185717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.185733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.185743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.185761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.185771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.185785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:53488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.185795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.185810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.516 [2024-07-15 20:24:46.185820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.185835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.516 [2024-07-15 20:24:46.185844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.185859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.185868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.185883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:53104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.185893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.185907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.516 [2024-07-15 20:24:46.185917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.185932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.516 [2024-07-15 20:24:46.185942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.186906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.186920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.186937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.186947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.186962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.186971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.186987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.186996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.187012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.187024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.187039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.187048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.187063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.187072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.187087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:23.516 [2024-07-15 20:24:46.187097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.187112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.187121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.187136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.516 [2024-07-15 20:24:46.187145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.187160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.516 [2024-07-15 20:24:46.187170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.187184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.516 [2024-07-15 20:24:46.187194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.187208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:53160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.516 [2024-07-15 20:24:46.187218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.187233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:53424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.187242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.187261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.187271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.187286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.516 [2024-07-15 20:24:46.187296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.187311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:53104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.516 [2024-07-15 20:24:46.187323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:23.516 [2024-07-15 20:24:46.187495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:52936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.516 [2024-07-15 20:24:46.187507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.187524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.517 [2024-07-15 20:24:46.187534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.187549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.517 [2024-07-15 20:24:46.187559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.187573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.517 [2024-07-15 20:24:46.187584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.187599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.517 [2024-07-15 20:24:46.187609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.188580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.517 [2024-07-15 20:24:46.188596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.188613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.517 [2024-07-15 20:24:46.188622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.188637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.517 [2024-07-15 20:24:46.188646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.188662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.517 [2024-07-15 20:24:46.188671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.188686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.517 [2024-07-15 20:24:46.188695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.188710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.517 [2024-07-15 20:24:46.188720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.188735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:53424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.517 [2024-07-15 20:24:46.188744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.188763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.517 [2024-07-15 20:24:46.188772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.188787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.517 [2024-07-15 20:24:46.188797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.188812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.517 [2024-07-15 20:24:46.188821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.189067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.517 [2024-07-15 20:24:46.189079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.189096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.517 [2024-07-15 20:24:46.189105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.189121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.517 [2024-07-15 20:24:46.189130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.189145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.517 [2024-07-15 20:24:46.189155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.189170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.517 [2024-07-15 20:24:46.189179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:26:23.517 [2024-07-15 20:24:46.189194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.517 [2024-07-15 20:24:46.189204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.189218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.517 [2024-07-15 20:24:46.189228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.189243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.517 [2024-07-15 20:24:46.189252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.189273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.517 [2024-07-15 20:24:46.189284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.189707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.517 [2024-07-15 20:24:46.189723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.189751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.517 [2024-07-15 20:24:46.189762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.189777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:53336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.517 [2024-07-15 20:24:46.189787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.189802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:53368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.517 [2024-07-15 20:24:46.189811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.189826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.517 [2024-07-15 20:24:46.189836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.189852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.517 [2024-07-15 20:24:46.189861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.189877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.517 [2024-07-15 20:24:46.189886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.189901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.517 [2024-07-15 20:24:46.189911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.189926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.517 [2024-07-15 20:24:46.189936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.189951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.517 [2024-07-15 20:24:46.189961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.189976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.517 [2024-07-15 20:24:46.189986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:23.517 [2024-07-15 20:24:46.190001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.517 [2024-07-15 20:24:46.190011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:23.517 Received shutdown signal, test time was about 31.505441 seconds 00:26:23.517 00:26:23.517 Latency(us) 00:26:23.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.517 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:23.517 Verification LBA range: start 0x0 length 0x4000 00:26:23.517 Nvme0n1 : 31.50 7536.08 29.44 0.00 0.00 16966.28 187.11 4026531.84 00:26:23.517 =================================================================================================================== 00:26:23.517 Total : 7536.08 29.44 0.00 0.00 16966.28 187.11 4026531.84 00:26:23.517 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:23.517 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:23.517 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:23.517 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:23.517 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 
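The trace that follows is the multipath_status teardown: the test deletes its subsystem over the RPC socket, clears its traps, removes its scratch file, and nvmftestfini then unloads the initiator modules and stops the target. Condensed into a plain sequence it looks roughly like the sketch below; this is a restatement of the flow visible in the trace, not a verbatim copy of nvmf/common.sh, and $nvmfpid stands for the target pid (163461 in this run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1      # drop the test subsystem
  trap - SIGINT SIGTERM EXIT                                 # clear the test's error traps
  rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  sync
  modprobe -v -r nvme-tcp                                    # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines below
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                         # stop the nvmf_tgt reactor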
00:26:23.517 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:23.517 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:23.517 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:23.517 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:23.517 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:23.776 rmmod nvme_tcp 00:26:23.776 rmmod nvme_fabrics 00:26:23.776 rmmod nvme_keyring 00:26:23.776 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:23.776 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:23.776 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:23.776 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 163461 ']' 00:26:23.776 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 163461 00:26:23.776 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 163461 ']' 00:26:23.776 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 163461 00:26:23.776 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:23.776 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:23.776 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 163461 00:26:23.776 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:23.776 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:23.776 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 163461' 00:26:23.776 killing process with pid 163461 00:26:23.776 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 163461 00:26:23.776 20:24:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 163461 00:26:24.036 20:24:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:24.036 20:24:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:24.036 20:24:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:24.036 20:24:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:24.036 20:24:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:24.036 20:24:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.036 20:24:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:24.036 20:24:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.941 20:24:51 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:25.941 00:26:25.941 real 0m43.159s 00:26:25.941 user 2m1.766s 00:26:25.941 sys 0m11.471s 00:26:25.941 20:24:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:25.941 20:24:51 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:25.941 ************************************ 00:26:25.941 END TEST nvmf_host_multipath_status 00:26:25.941 ************************************ 00:26:25.941 20:24:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:25.941 20:24:51 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:25.941 20:24:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:25.941 20:24:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:25.941 20:24:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:26.200 ************************************ 00:26:26.200 START TEST nvmf_discovery_remove_ifc 00:26:26.200 ************************************ 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:26.200 * Looking for test storage... 00:26:26.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:26.200 20:24:51 
nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.200 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:26.201 20:24:51 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:26.201 20:24:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.474 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:31.474 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:31.474 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:31.474 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:31.474 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:31.474 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:31.474 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:31.474 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:31.474 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:31.474 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:31.474 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@296 -- # local -ga e810 00:26:31.474 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:31.474 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:31.474 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:31.474 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:31.474 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:31.474 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:31.474 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:31.474 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:31.474 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:31.475 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:31.475 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:31.475 Found net devices under 0000:af:00.0: cvl_0_0 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:31.475 Found net devices under 0000:af:00.1: cvl_0_1 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:31.475 
20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:31.475 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:31.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:31.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:26:31.759 00:26:31.759 --- 10.0.0.2 ping statistics --- 00:26:31.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.759 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:31.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:31.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:26:31.759 00:26:31.759 --- 10.0.0.1 ping statistics --- 00:26:31.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.759 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=173209 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 173209 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 173209 ']' 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:31.759 20:24:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.759 [2024-07-15 20:24:57.027290] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:26:31.759 [2024-07-15 20:24:57.027345] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:31.759 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.053 [2024-07-15 20:24:57.103222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.053 [2024-07-15 20:24:57.195226] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.053 [2024-07-15 20:24:57.195270] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.053 [2024-07-15 20:24:57.195281] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.053 [2024-07-15 20:24:57.195290] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.053 [2024-07-15 20:24:57.195298] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:32.053 [2024-07-15 20:24:57.195319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.053 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:32.053 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:32.053 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:32.053 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:32.053 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.053 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.053 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:32.053 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.053 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.053 [2024-07-15 20:24:57.346645] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.053 [2024-07-15 20:24:57.354792] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:32.053 null0 00:26:32.053 [2024-07-15 20:24:57.386798] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.312 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.312 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=173445 00:26:32.312 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:32.312 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 173445 /tmp/host.sock 00:26:32.312 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 173445 ']' 00:26:32.312 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:32.312 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:26:32.312 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:32.312 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:32.312 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:32.312 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.312 [2024-07-15 20:24:57.459876] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:26:32.312 [2024-07-15 20:24:57.459928] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173445 ] 00:26:32.312 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.312 [2024-07-15 20:24:57.540095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.312 [2024-07-15 20:24:57.630068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.312 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:32.312 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:32.312 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:32.312 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:32.571 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.571 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.571 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.571 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:32.571 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.571 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.571 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.571 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:32.571 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.571 20:24:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.509 [2024-07-15 20:24:58.763751] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:33.509 [2024-07-15 20:24:58.763774] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:33.509 [2024-07-15 20:24:58.763791] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:33.509 [2024-07-15 20:24:58.850092] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:33.768 [2024-07-15 20:24:58.907717] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:33.768 [2024-07-15 20:24:58.907772] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:33.768 [2024-07-15 20:24:58.907800] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:33.768 [2024-07-15 20:24:58.907816] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:33.768 [2024-07-15 20:24:58.907840] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:33.768 20:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.768 20:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:33.768 20:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:33.768 [2024-07-15 20:24:58.912440] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1d4d3a0 was disconnected and freed. delete nvme_qpair. 00:26:33.768 20:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.768 20:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:33.768 20:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.768 20:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:33.768 20:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.768 20:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:33.768 20:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.768 20:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:33.768 20:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:33.768 20:24:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:33.768 20:24:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:33.768 20:24:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:33.768 20:24:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.768 20:24:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:33.768 20:24:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.768 20:24:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:33.768 20:24:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.768 20:24:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:33.768 20:24:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.027 20:24:59 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:34.027 20:24:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:34.963 20:25:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:34.963 20:25:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.964 20:25:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:34.964 20:25:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.964 20:25:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.964 20:25:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:34.964 20:25:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:34.964 20:25:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.964 20:25:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:34.964 20:25:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:35.900 20:25:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:35.900 20:25:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:35.900 20:25:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:35.900 20:25:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.900 20:25:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:35.900 20:25:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.900 20:25:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:35.900 20:25:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.900 20:25:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:35.900 20:25:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:37.277 20:25:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:37.277 20:25:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:37.277 20:25:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:37.277 20:25:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.277 20:25:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:37.277 20:25:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.277 20:25:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.277 20:25:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.277 20:25:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:37.277 20:25:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:38.212 20:25:03 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.212 20:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.212 20:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.212 20:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.212 20:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.212 20:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.212 20:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.212 20:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.212 20:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:38.212 20:25:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:39.149 [2024-07-15 20:25:04.348707] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:39.149 [2024-07-15 20:25:04.348754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.149 [2024-07-15 20:25:04.348774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.149 [2024-07-15 20:25:04.348787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.149 [2024-07-15 20:25:04.348797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.149 [2024-07-15 20:25:04.348807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.149 [2024-07-15 20:25:04.348817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.149 [2024-07-15 20:25:04.348827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.149 [2024-07-15 20:25:04.348837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.149 [2024-07-15 20:25:04.348848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:39.149 [2024-07-15 20:25:04.348857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.149 [2024-07-15 20:25:04.348866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13c40 is same with the state(5) to be set 00:26:39.149 20:25:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:39.149 20:25:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.149 20:25:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:26:39.149 20:25:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.149 20:25:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:39.149 20:25:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.149 20:25:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:39.149 [2024-07-15 20:25:04.358734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13c40 (9): Bad file descriptor 00:26:39.149 [2024-07-15 20:25:04.368777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:39.149 20:25:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.149 20:25:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:39.149 20:25:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.084 [2024-07-15 20:25:05.410311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:40.084 20:25:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.084 [2024-07-15 20:25:05.410391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13c40 with addr=10.0.0.2, port=4420 00:26:40.084 [2024-07-15 20:25:05.410423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13c40 is same with the state(5) to be set 00:26:40.084 [2024-07-15 20:25:05.410479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13c40 (9): Bad file descriptor 00:26:40.084 [2024-07-15 20:25:05.410576] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:40.084 [2024-07-15 20:25:05.410614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:40.084 [2024-07-15 20:25:05.410634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:40.084 [2024-07-15 20:25:05.410655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:40.084 [2024-07-15 20:25:05.410702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.084 [2024-07-15 20:25:05.410724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:40.084 20:25:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.084 20:25:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.084 20:25:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.084 20:25:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.084 20:25:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.084 20:25:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.342 20:25:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.342 20:25:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:40.342 20:25:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:41.275 [2024-07-15 20:25:06.413225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:41.275 [2024-07-15 20:25:06.413251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:41.275 [2024-07-15 20:25:06.413267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:41.275 [2024-07-15 20:25:06.413277] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:41.275 [2024-07-15 20:25:06.413293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
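[editor's note] The reconnect attempts and "Resetting controller failed" bursts above follow from the discovery options the host applied earlier in this trace: ctrlr-loss-timeout 2 s, reconnect-delay 1 s, fast-io-fail 1 s. For reference, the equivalent stand-alone RPC (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py; socket path and NQN as used in this run) would be:

    # sketch: same options bdev_nvme_start_discovery was given at the start of this test
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
        --wait-for-attach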
00:26:41.275 [2024-07-15 20:25:06.413315] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:41.275 [2024-07-15 20:25:06.413341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.275 [2024-07-15 20:25:06.413354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.275 [2024-07-15 20:25:06.413366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.275 [2024-07-15 20:25:06.413376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.275 [2024-07-15 20:25:06.413387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.275 [2024-07-15 20:25:06.413396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.275 [2024-07-15 20:25:06.413406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.275 [2024-07-15 20:25:06.413416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.275 [2024-07-15 20:25:06.413426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.275 [2024-07-15 20:25:06.413435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.275 [2024-07-15 20:25:06.413444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:26:41.275 [2024-07-15 20:25:06.413946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13080 (9): Bad file descriptor 00:26:41.275 [2024-07-15 20:25:06.414958] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:41.275 [2024-07-15 20:25:06.414973] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:41.276 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:41.276 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.276 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:41.276 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.276 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:41.276 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.276 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:41.276 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.276 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:41.276 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:41.276 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:41.276 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:41.276 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:41.276 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.276 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:41.276 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.276 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:41.276 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.276 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:41.276 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.533 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:41.533 20:25:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:42.465 20:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.465 20:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.465 20:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.465 20:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.465 20:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.465 20:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.465 20:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.465 20:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.465 20:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:42.465 20:25:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:43.400 [2024-07-15 20:25:08.471062] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:43.400 [2024-07-15 20:25:08.471083] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:43.400 [2024-07-15 20:25:08.471100] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:43.400 [2024-07-15 20:25:08.599576] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:43.400 [2024-07-15 20:25:08.701557] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:43.400 [2024-07-15 20:25:08.701600] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:43.400 [2024-07-15 20:25:08.701628] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:43.400 [2024-07-15 20:25:08.701644] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:43.400 [2024-07-15 20:25:08.701654] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:43.400 [2024-07-15 20:25:08.708164] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1d1a710 was disconnected and freed. delete nvme_qpair. 
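[editor's note] Throughout this test, wait_for_bdev/get_bdev_list poll the host app over /tmp/host.sock until the bdev list matches the expected name: empty while the interface is down, nvme1n1 once discovery re-attaches as above. A simplified stand-alone sketch of that polling pattern, assuming an SPDK checkout in $SPDK_DIR (the harness's own helpers live in host/discovery_remove_ifc.sh):

    # sketch of the polling loop seen in the trace (rpc_cmd | jq | sort | xargs ... sleep 1)
    wait_for_bdev_sketch() {
        local expected=$1 names
        while :; do
            names=$("$SPDK_DIR"/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs |
                    jq -r '.[].name' | sort | xargs)
            [ "$names" = "$expected" ] && return 0   # "" while removed, "nvme1n1" after re-attach
            sleep 1
        done
    }
    wait_for_bdev_sketch nvme1n1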
00:26:43.400 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:43.400 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.400 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:43.400 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.400 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:43.400 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.400 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:43.400 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.659 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:43.659 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:43.659 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 173445 00:26:43.659 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 173445 ']' 00:26:43.659 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 173445 00:26:43.659 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:43.659 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:43.659 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 173445 00:26:43.659 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:43.659 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:43.659 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 173445' 00:26:43.659 killing process with pid 173445 00:26:43.659 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 173445 00:26:43.659 20:25:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 173445 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:43.918 rmmod nvme_tcp 00:26:43.918 rmmod nvme_fabrics 00:26:43.918 rmmod nvme_keyring 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:43.918 
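[editor's note] The module cleanup that just completed (nvmfcleanup in nvmf/common.sh) reduces to syncing and unloading the NVMe/TCP stack; a condensed sketch, with the harness's retry loop omitted:

    # sketch: the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring being removed
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics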
20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 173209 ']' 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 173209 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 173209 ']' 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 173209 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 173209 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 173209' 00:26:43.918 killing process with pid 173209 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 173209 00:26:43.918 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 173209 00:26:44.177 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:44.177 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:44.177 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:44.177 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:44.177 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:44.177 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.177 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:44.177 20:25:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.081 20:25:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:46.081 00:26:46.081 real 0m20.107s 00:26:46.081 user 0m24.754s 00:26:46.081 sys 0m5.522s 00:26:46.081 20:25:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:46.081 20:25:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:46.081 ************************************ 00:26:46.081 END TEST nvmf_discovery_remove_ifc 00:26:46.081 ************************************ 00:26:46.341 20:25:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:46.341 20:25:11 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:46.341 20:25:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:46.341 20:25:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:46.341 20:25:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:46.341 ************************************ 00:26:46.341 START TEST nvmf_identify_kernel_target 00:26:46.341 ************************************ 00:26:46.341 20:25:11 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:46.341 * Looking for test storage... 00:26:46.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.341 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:46.342 20:25:11 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:46.342 20:25:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:51.614 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.614 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:51.614 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:51.615 Found net devices under 0000:af:00.0: cvl_0_0 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:51.615 Found net devices under 0000:af:00.1: cvl_0_1 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:51.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:51.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:26:51.615 00:26:51.615 --- 10.0.0.2 ping statistics --- 00:26:51.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.615 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:51.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:26:51.615 00:26:51.615 --- 10.0.0.1 ping statistics --- 00:26:51.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.615 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:51.615 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:51.875 20:25:16 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:51.875 20:25:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:51.875 20:25:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:51.875 20:25:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:54.412 Waiting for block devices as requested 00:26:54.412 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:26:54.412 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:54.672 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:54.672 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:54.672 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:54.672 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:54.930 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:54.930 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:54.930 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:55.189 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:55.189 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:55.189 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:55.189 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:55.447 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:55.447 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:55.447 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:55.447 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:55.706 20:25:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:55.706 20:25:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:55.706 20:25:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:55.706 20:25:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:55.706 20:25:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:55.706 20:25:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:55.706 20:25:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:55.706 20:25:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:55.706 20:25:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:55.706 No valid GPT data, bailing 00:26:55.706 20:25:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:55.706 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:55.706 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:55.706 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:55.706 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:55.706 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:55.706 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:55.706 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:55.706 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:55.707 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:55.707 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:55.707 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:26:55.707 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:55.707 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:55.707 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:55.707 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:55.707 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:55.707 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:55.966 00:26:55.966 Discovery Log Number of Records 2, Generation counter 2 00:26:55.966 =====Discovery Log Entry 0====== 00:26:55.966 trtype: tcp 00:26:55.966 adrfam: ipv4 00:26:55.966 subtype: current discovery subsystem 00:26:55.966 treq: not specified, sq flow control disable supported 00:26:55.966 portid: 1 00:26:55.966 trsvcid: 4420 00:26:55.966 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:55.966 traddr: 10.0.0.1 00:26:55.966 eflags: none 00:26:55.966 sectype: none 00:26:55.966 =====Discovery Log Entry 1====== 00:26:55.966 trtype: tcp 00:26:55.966 adrfam: ipv4 00:26:55.966 subtype: nvme subsystem 00:26:55.966 treq: not specified, sq flow control disable supported 00:26:55.966 portid: 1 00:26:55.966 trsvcid: 4420 00:26:55.966 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:55.966 traddr: 10.0.0.1 00:26:55.966 eflags: none 00:26:55.966 sectype: none 00:26:55.966 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:55.966 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:55.966 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.966 ===================================================== 00:26:55.966 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:55.966 ===================================================== 00:26:55.966 Controller Capabilities/Features 00:26:55.966 ================================ 00:26:55.966 Vendor ID: 0000 00:26:55.966 Subsystem Vendor ID: 0000 00:26:55.966 Serial Number: 8a1a57cb83b62e59cc62 00:26:55.966 Model Number: Linux 00:26:55.966 Firmware Version: 6.7.0-68 00:26:55.966 Recommended Arb Burst: 0 00:26:55.966 IEEE OUI Identifier: 00 00 00 00:26:55.966 Multi-path I/O 00:26:55.966 May have multiple subsystem ports: No 00:26:55.966 May have multiple 
controllers: No 00:26:55.966 Associated with SR-IOV VF: No 00:26:55.966 Max Data Transfer Size: Unlimited 00:26:55.966 Max Number of Namespaces: 0 00:26:55.966 Max Number of I/O Queues: 1024 00:26:55.966 NVMe Specification Version (VS): 1.3 00:26:55.966 NVMe Specification Version (Identify): 1.3 00:26:55.966 Maximum Queue Entries: 1024 00:26:55.966 Contiguous Queues Required: No 00:26:55.966 Arbitration Mechanisms Supported 00:26:55.967 Weighted Round Robin: Not Supported 00:26:55.967 Vendor Specific: Not Supported 00:26:55.967 Reset Timeout: 7500 ms 00:26:55.967 Doorbell Stride: 4 bytes 00:26:55.967 NVM Subsystem Reset: Not Supported 00:26:55.967 Command Sets Supported 00:26:55.967 NVM Command Set: Supported 00:26:55.967 Boot Partition: Not Supported 00:26:55.967 Memory Page Size Minimum: 4096 bytes 00:26:55.967 Memory Page Size Maximum: 4096 bytes 00:26:55.967 Persistent Memory Region: Not Supported 00:26:55.967 Optional Asynchronous Events Supported 00:26:55.967 Namespace Attribute Notices: Not Supported 00:26:55.967 Firmware Activation Notices: Not Supported 00:26:55.967 ANA Change Notices: Not Supported 00:26:55.967 PLE Aggregate Log Change Notices: Not Supported 00:26:55.967 LBA Status Info Alert Notices: Not Supported 00:26:55.967 EGE Aggregate Log Change Notices: Not Supported 00:26:55.967 Normal NVM Subsystem Shutdown event: Not Supported 00:26:55.967 Zone Descriptor Change Notices: Not Supported 00:26:55.967 Discovery Log Change Notices: Supported 00:26:55.967 Controller Attributes 00:26:55.967 128-bit Host Identifier: Not Supported 00:26:55.967 Non-Operational Permissive Mode: Not Supported 00:26:55.967 NVM Sets: Not Supported 00:26:55.967 Read Recovery Levels: Not Supported 00:26:55.967 Endurance Groups: Not Supported 00:26:55.967 Predictable Latency Mode: Not Supported 00:26:55.967 Traffic Based Keep ALive: Not Supported 00:26:55.967 Namespace Granularity: Not Supported 00:26:55.967 SQ Associations: Not Supported 00:26:55.967 UUID List: Not Supported 00:26:55.967 Multi-Domain Subsystem: Not Supported 00:26:55.967 Fixed Capacity Management: Not Supported 00:26:55.967 Variable Capacity Management: Not Supported 00:26:55.967 Delete Endurance Group: Not Supported 00:26:55.967 Delete NVM Set: Not Supported 00:26:55.967 Extended LBA Formats Supported: Not Supported 00:26:55.967 Flexible Data Placement Supported: Not Supported 00:26:55.967 00:26:55.967 Controller Memory Buffer Support 00:26:55.967 ================================ 00:26:55.967 Supported: No 00:26:55.967 00:26:55.967 Persistent Memory Region Support 00:26:55.967 ================================ 00:26:55.967 Supported: No 00:26:55.967 00:26:55.967 Admin Command Set Attributes 00:26:55.967 ============================ 00:26:55.967 Security Send/Receive: Not Supported 00:26:55.967 Format NVM: Not Supported 00:26:55.967 Firmware Activate/Download: Not Supported 00:26:55.967 Namespace Management: Not Supported 00:26:55.967 Device Self-Test: Not Supported 00:26:55.967 Directives: Not Supported 00:26:55.967 NVMe-MI: Not Supported 00:26:55.967 Virtualization Management: Not Supported 00:26:55.967 Doorbell Buffer Config: Not Supported 00:26:55.967 Get LBA Status Capability: Not Supported 00:26:55.967 Command & Feature Lockdown Capability: Not Supported 00:26:55.967 Abort Command Limit: 1 00:26:55.967 Async Event Request Limit: 1 00:26:55.967 Number of Firmware Slots: N/A 00:26:55.967 Firmware Slot 1 Read-Only: N/A 00:26:55.967 Firmware Activation Without Reset: N/A 00:26:55.967 Multiple Update Detection Support: N/A 
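Stepping back to the nvmf_tcp_init sequence logged before this identify output: the test network is just a pair of E810 ports split across network namespaces with iproute2. A minimal sketch of those steps, using the interface names (cvl_0_0 / cvl_0_1) and 10.0.0.0/24 addressing from this run; every command below appears in the trace above.

  # Rebuild the NVMe/TCP test topology from nvmf_tcp_init
  NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1

  ip netns add "$NVMF_TARGET_NAMESPACE"                # target side gets its own netns
  ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"   # move the target port into it

  ip addr add 10.0.0.1/24 dev cvl_0_1                                         # NVMF_INITIATOR_IP
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0  # NVMF_FIRST_TARGET_IP

  ip link set cvl_0_1 up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in

  ping -c 1 10.0.0.2                                           # initiator -> target
  ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1    # target -> initiator

Both pings succeeding (as they do above) is what lets common.sh return 0 and proceed to load nvme-tcp.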
00:26:55.967 Firmware Update Granularity: No Information Provided 00:26:55.967 Per-Namespace SMART Log: No 00:26:55.967 Asymmetric Namespace Access Log Page: Not Supported 00:26:55.967 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:55.967 Command Effects Log Page: Not Supported 00:26:55.967 Get Log Page Extended Data: Supported 00:26:55.967 Telemetry Log Pages: Not Supported 00:26:55.967 Persistent Event Log Pages: Not Supported 00:26:55.967 Supported Log Pages Log Page: May Support 00:26:55.967 Commands Supported & Effects Log Page: Not Supported 00:26:55.967 Feature Identifiers & Effects Log Page:May Support 00:26:55.967 NVMe-MI Commands & Effects Log Page: May Support 00:26:55.967 Data Area 4 for Telemetry Log: Not Supported 00:26:55.967 Error Log Page Entries Supported: 1 00:26:55.967 Keep Alive: Not Supported 00:26:55.967 00:26:55.967 NVM Command Set Attributes 00:26:55.967 ========================== 00:26:55.967 Submission Queue Entry Size 00:26:55.967 Max: 1 00:26:55.967 Min: 1 00:26:55.967 Completion Queue Entry Size 00:26:55.967 Max: 1 00:26:55.967 Min: 1 00:26:55.967 Number of Namespaces: 0 00:26:55.967 Compare Command: Not Supported 00:26:55.967 Write Uncorrectable Command: Not Supported 00:26:55.967 Dataset Management Command: Not Supported 00:26:55.967 Write Zeroes Command: Not Supported 00:26:55.967 Set Features Save Field: Not Supported 00:26:55.967 Reservations: Not Supported 00:26:55.967 Timestamp: Not Supported 00:26:55.967 Copy: Not Supported 00:26:55.967 Volatile Write Cache: Not Present 00:26:55.967 Atomic Write Unit (Normal): 1 00:26:55.967 Atomic Write Unit (PFail): 1 00:26:55.967 Atomic Compare & Write Unit: 1 00:26:55.967 Fused Compare & Write: Not Supported 00:26:55.967 Scatter-Gather List 00:26:55.967 SGL Command Set: Supported 00:26:55.967 SGL Keyed: Not Supported 00:26:55.967 SGL Bit Bucket Descriptor: Not Supported 00:26:55.967 SGL Metadata Pointer: Not Supported 00:26:55.967 Oversized SGL: Not Supported 00:26:55.967 SGL Metadata Address: Not Supported 00:26:55.967 SGL Offset: Supported 00:26:55.967 Transport SGL Data Block: Not Supported 00:26:55.967 Replay Protected Memory Block: Not Supported 00:26:55.967 00:26:55.967 Firmware Slot Information 00:26:55.967 ========================= 00:26:55.967 Active slot: 0 00:26:55.967 00:26:55.967 00:26:55.967 Error Log 00:26:55.967 ========= 00:26:55.967 00:26:55.967 Active Namespaces 00:26:55.967 ================= 00:26:55.967 Discovery Log Page 00:26:55.967 ================== 00:26:55.967 Generation Counter: 2 00:26:55.967 Number of Records: 2 00:26:55.967 Record Format: 0 00:26:55.967 00:26:55.967 Discovery Log Entry 0 00:26:55.967 ---------------------- 00:26:55.967 Transport Type: 3 (TCP) 00:26:55.967 Address Family: 1 (IPv4) 00:26:55.967 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:55.967 Entry Flags: 00:26:55.967 Duplicate Returned Information: 0 00:26:55.967 Explicit Persistent Connection Support for Discovery: 0 00:26:55.967 Transport Requirements: 00:26:55.967 Secure Channel: Not Specified 00:26:55.967 Port ID: 1 (0x0001) 00:26:55.967 Controller ID: 65535 (0xffff) 00:26:55.967 Admin Max SQ Size: 32 00:26:55.967 Transport Service Identifier: 4420 00:26:55.967 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:55.967 Transport Address: 10.0.0.1 00:26:55.967 Discovery Log Entry 1 00:26:55.967 ---------------------- 00:26:55.967 Transport Type: 3 (TCP) 00:26:55.967 Address Family: 1 (IPv4) 00:26:55.967 Subsystem Type: 2 (NVM Subsystem) 00:26:55.967 Entry Flags: 
00:26:55.967 Duplicate Returned Information: 0 00:26:55.967 Explicit Persistent Connection Support for Discovery: 0 00:26:55.967 Transport Requirements: 00:26:55.967 Secure Channel: Not Specified 00:26:55.967 Port ID: 1 (0x0001) 00:26:55.967 Controller ID: 65535 (0xffff) 00:26:55.967 Admin Max SQ Size: 32 00:26:55.967 Transport Service Identifier: 4420 00:26:55.967 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:55.967 Transport Address: 10.0.0.1 00:26:55.967 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:55.967 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.967 get_feature(0x01) failed 00:26:55.967 get_feature(0x02) failed 00:26:55.967 get_feature(0x04) failed 00:26:55.967 ===================================================== 00:26:55.967 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:55.967 ===================================================== 00:26:55.967 Controller Capabilities/Features 00:26:55.967 ================================ 00:26:55.967 Vendor ID: 0000 00:26:55.967 Subsystem Vendor ID: 0000 00:26:55.967 Serial Number: 97a8df2be643ab3db1de 00:26:55.967 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:55.967 Firmware Version: 6.7.0-68 00:26:55.967 Recommended Arb Burst: 6 00:26:55.967 IEEE OUI Identifier: 00 00 00 00:26:55.967 Multi-path I/O 00:26:55.967 May have multiple subsystem ports: Yes 00:26:55.967 May have multiple controllers: Yes 00:26:55.967 Associated with SR-IOV VF: No 00:26:55.967 Max Data Transfer Size: Unlimited 00:26:55.967 Max Number of Namespaces: 1024 00:26:55.967 Max Number of I/O Queues: 128 00:26:55.967 NVMe Specification Version (VS): 1.3 00:26:55.967 NVMe Specification Version (Identify): 1.3 00:26:55.967 Maximum Queue Entries: 1024 00:26:55.967 Contiguous Queues Required: No 00:26:55.967 Arbitration Mechanisms Supported 00:26:55.967 Weighted Round Robin: Not Supported 00:26:55.967 Vendor Specific: Not Supported 00:26:55.967 Reset Timeout: 7500 ms 00:26:55.967 Doorbell Stride: 4 bytes 00:26:55.967 NVM Subsystem Reset: Not Supported 00:26:55.967 Command Sets Supported 00:26:55.967 NVM Command Set: Supported 00:26:55.967 Boot Partition: Not Supported 00:26:55.967 Memory Page Size Minimum: 4096 bytes 00:26:55.967 Memory Page Size Maximum: 4096 bytes 00:26:55.968 Persistent Memory Region: Not Supported 00:26:55.968 Optional Asynchronous Events Supported 00:26:55.968 Namespace Attribute Notices: Supported 00:26:55.968 Firmware Activation Notices: Not Supported 00:26:55.968 ANA Change Notices: Supported 00:26:55.968 PLE Aggregate Log Change Notices: Not Supported 00:26:55.968 LBA Status Info Alert Notices: Not Supported 00:26:55.968 EGE Aggregate Log Change Notices: Not Supported 00:26:55.968 Normal NVM Subsystem Shutdown event: Not Supported 00:26:55.968 Zone Descriptor Change Notices: Not Supported 00:26:55.968 Discovery Log Change Notices: Not Supported 00:26:55.968 Controller Attributes 00:26:55.968 128-bit Host Identifier: Supported 00:26:55.968 Non-Operational Permissive Mode: Not Supported 00:26:55.968 NVM Sets: Not Supported 00:26:55.968 Read Recovery Levels: Not Supported 00:26:55.968 Endurance Groups: Not Supported 00:26:55.968 Predictable Latency Mode: Not Supported 00:26:55.968 Traffic Based Keep ALive: Supported 00:26:55.968 Namespace Granularity: Not Supported 
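The kernel target these identify dumps are talking to was assembled through the nvmet configfs tree (the modprobe/mkdir/echo/ln -s calls logged before the discovery output). The trace truncates the redirection targets of the echo commands, so the attribute file names below (attr_model, attr_allow_any_host, device_path, enable, addr_*) are assumptions based on the standard nvmet configfs layout rather than taken from this output; attr_model is at least consistent with the "Model Number: SPDK-nqn.2016-06.io.spdk:testnqn" reported above, and /dev/nvme0n1 is the local drive the spdk-gpt.py "No valid GPT data, bailing" check deemed unused. The teardown half mirrors the clean_kernel_target calls logged after nvmftestfini further below.

  # Hedged reconstruction of configure_kernel_target / clean_kernel_target
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  ns=$subsys/namespaces/1
  port=$nvmet/ports/1

  modprobe nvmet                                    # nvmet_tcp also ends up loaded by teardown time
  mkdir "$subsys" "$ns" "$port"

  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"        # assumed file name
  echo 1                                > "$subsys/attr_allow_any_host"  # assumed file name
  echo /dev/nvme0n1                     > "$ns/device_path"
  echo 1                                > "$ns/enable"

  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp      > "$port/addr_trtype"
  echo 4420     > "$port/addr_trsvcid"
  echo ipv4     > "$port/addr_adrfam"

  ln -s "$subsys" "$port/subsystems/"               # expose the subsystem on the TCP port

  # Teardown (clean_kernel_target)
  echo 0 > "$ns/enable"                             # assumed target of the logged 'echo 0'
  rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$ns" "$port" "$subsys"
  modprobe -r nvmet_tcp nvmet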
00:26:55.968 SQ Associations: Not Supported 00:26:55.968 UUID List: Not Supported 00:26:55.968 Multi-Domain Subsystem: Not Supported 00:26:55.968 Fixed Capacity Management: Not Supported 00:26:55.968 Variable Capacity Management: Not Supported 00:26:55.968 Delete Endurance Group: Not Supported 00:26:55.968 Delete NVM Set: Not Supported 00:26:55.968 Extended LBA Formats Supported: Not Supported 00:26:55.968 Flexible Data Placement Supported: Not Supported 00:26:55.968 00:26:55.968 Controller Memory Buffer Support 00:26:55.968 ================================ 00:26:55.968 Supported: No 00:26:55.968 00:26:55.968 Persistent Memory Region Support 00:26:55.968 ================================ 00:26:55.968 Supported: No 00:26:55.968 00:26:55.968 Admin Command Set Attributes 00:26:55.968 ============================ 00:26:55.968 Security Send/Receive: Not Supported 00:26:55.968 Format NVM: Not Supported 00:26:55.968 Firmware Activate/Download: Not Supported 00:26:55.968 Namespace Management: Not Supported 00:26:55.968 Device Self-Test: Not Supported 00:26:55.968 Directives: Not Supported 00:26:55.968 NVMe-MI: Not Supported 00:26:55.968 Virtualization Management: Not Supported 00:26:55.968 Doorbell Buffer Config: Not Supported 00:26:55.968 Get LBA Status Capability: Not Supported 00:26:55.968 Command & Feature Lockdown Capability: Not Supported 00:26:55.968 Abort Command Limit: 4 00:26:55.968 Async Event Request Limit: 4 00:26:55.968 Number of Firmware Slots: N/A 00:26:55.968 Firmware Slot 1 Read-Only: N/A 00:26:55.968 Firmware Activation Without Reset: N/A 00:26:55.968 Multiple Update Detection Support: N/A 00:26:55.968 Firmware Update Granularity: No Information Provided 00:26:55.968 Per-Namespace SMART Log: Yes 00:26:55.968 Asymmetric Namespace Access Log Page: Supported 00:26:55.968 ANA Transition Time : 10 sec 00:26:55.968 00:26:55.968 Asymmetric Namespace Access Capabilities 00:26:55.968 ANA Optimized State : Supported 00:26:55.968 ANA Non-Optimized State : Supported 00:26:55.968 ANA Inaccessible State : Supported 00:26:55.968 ANA Persistent Loss State : Supported 00:26:55.968 ANA Change State : Supported 00:26:55.968 ANAGRPID is not changed : No 00:26:55.968 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:55.968 00:26:55.968 ANA Group Identifier Maximum : 128 00:26:55.968 Number of ANA Group Identifiers : 128 00:26:55.968 Max Number of Allowed Namespaces : 1024 00:26:55.968 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:55.968 Command Effects Log Page: Supported 00:26:55.968 Get Log Page Extended Data: Supported 00:26:55.968 Telemetry Log Pages: Not Supported 00:26:55.968 Persistent Event Log Pages: Not Supported 00:26:55.968 Supported Log Pages Log Page: May Support 00:26:55.968 Commands Supported & Effects Log Page: Not Supported 00:26:55.968 Feature Identifiers & Effects Log Page:May Support 00:26:55.968 NVMe-MI Commands & Effects Log Page: May Support 00:26:55.968 Data Area 4 for Telemetry Log: Not Supported 00:26:55.968 Error Log Page Entries Supported: 128 00:26:55.968 Keep Alive: Supported 00:26:55.968 Keep Alive Granularity: 1000 ms 00:26:55.968 00:26:55.968 NVM Command Set Attributes 00:26:55.968 ========================== 00:26:55.968 Submission Queue Entry Size 00:26:55.968 Max: 64 00:26:55.968 Min: 64 00:26:55.968 Completion Queue Entry Size 00:26:55.968 Max: 16 00:26:55.968 Min: 16 00:26:55.968 Number of Namespaces: 1024 00:26:55.968 Compare Command: Not Supported 00:26:55.968 Write Uncorrectable Command: Not Supported 00:26:55.968 Dataset Management Command: Supported 
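The two identify dumps surrounding this point come from querying that kernel target twice: once against the well-known discovery subsystem and once against nqn.2016-06.io.spdk:testnqn. Condensed from the commands in the trace (hostnqn/hostid are the values nvme gen-hostnqn produced for this run; the identify binary lives under the SPDK build tree):

  # Discovery via nvme-cli
  nvme discover -t tcp -a 10.0.0.1 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
      --hostid=00abaa28-3537-eb11-906e-0017a4403562

  # Identify via SPDK, which takes the endpoint as one transport ID string
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'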
00:26:55.968 Write Zeroes Command: Supported 00:26:55.968 Set Features Save Field: Not Supported 00:26:55.968 Reservations: Not Supported 00:26:55.968 Timestamp: Not Supported 00:26:55.968 Copy: Not Supported 00:26:55.968 Volatile Write Cache: Present 00:26:55.968 Atomic Write Unit (Normal): 1 00:26:55.968 Atomic Write Unit (PFail): 1 00:26:55.968 Atomic Compare & Write Unit: 1 00:26:55.968 Fused Compare & Write: Not Supported 00:26:55.968 Scatter-Gather List 00:26:55.968 SGL Command Set: Supported 00:26:55.968 SGL Keyed: Not Supported 00:26:55.968 SGL Bit Bucket Descriptor: Not Supported 00:26:55.968 SGL Metadata Pointer: Not Supported 00:26:55.968 Oversized SGL: Not Supported 00:26:55.968 SGL Metadata Address: Not Supported 00:26:55.968 SGL Offset: Supported 00:26:55.968 Transport SGL Data Block: Not Supported 00:26:55.968 Replay Protected Memory Block: Not Supported 00:26:55.968 00:26:55.968 Firmware Slot Information 00:26:55.968 ========================= 00:26:55.968 Active slot: 0 00:26:55.968 00:26:55.968 Asymmetric Namespace Access 00:26:55.968 =========================== 00:26:55.968 Change Count : 0 00:26:55.968 Number of ANA Group Descriptors : 1 00:26:55.968 ANA Group Descriptor : 0 00:26:55.968 ANA Group ID : 1 00:26:55.968 Number of NSID Values : 1 00:26:55.968 Change Count : 0 00:26:55.968 ANA State : 1 00:26:55.968 Namespace Identifier : 1 00:26:55.968 00:26:55.968 Commands Supported and Effects 00:26:55.968 ============================== 00:26:55.968 Admin Commands 00:26:55.968 -------------- 00:26:55.968 Get Log Page (02h): Supported 00:26:55.968 Identify (06h): Supported 00:26:55.968 Abort (08h): Supported 00:26:55.968 Set Features (09h): Supported 00:26:55.968 Get Features (0Ah): Supported 00:26:55.968 Asynchronous Event Request (0Ch): Supported 00:26:55.968 Keep Alive (18h): Supported 00:26:55.968 I/O Commands 00:26:55.968 ------------ 00:26:55.968 Flush (00h): Supported 00:26:55.968 Write (01h): Supported LBA-Change 00:26:55.968 Read (02h): Supported 00:26:55.968 Write Zeroes (08h): Supported LBA-Change 00:26:55.968 Dataset Management (09h): Supported 00:26:55.968 00:26:55.968 Error Log 00:26:55.968 ========= 00:26:55.968 Entry: 0 00:26:55.968 Error Count: 0x3 00:26:55.968 Submission Queue Id: 0x0 00:26:55.968 Command Id: 0x5 00:26:55.968 Phase Bit: 0 00:26:55.968 Status Code: 0x2 00:26:55.968 Status Code Type: 0x0 00:26:55.968 Do Not Retry: 1 00:26:55.968 Error Location: 0x28 00:26:55.968 LBA: 0x0 00:26:55.968 Namespace: 0x0 00:26:55.968 Vendor Log Page: 0x0 00:26:55.968 ----------- 00:26:55.968 Entry: 1 00:26:55.968 Error Count: 0x2 00:26:55.968 Submission Queue Id: 0x0 00:26:55.968 Command Id: 0x5 00:26:55.968 Phase Bit: 0 00:26:55.968 Status Code: 0x2 00:26:55.968 Status Code Type: 0x0 00:26:55.968 Do Not Retry: 1 00:26:55.968 Error Location: 0x28 00:26:55.968 LBA: 0x0 00:26:55.968 Namespace: 0x0 00:26:55.968 Vendor Log Page: 0x0 00:26:55.968 ----------- 00:26:55.968 Entry: 2 00:26:55.968 Error Count: 0x1 00:26:55.968 Submission Queue Id: 0x0 00:26:55.968 Command Id: 0x4 00:26:55.968 Phase Bit: 0 00:26:55.968 Status Code: 0x2 00:26:55.968 Status Code Type: 0x0 00:26:55.968 Do Not Retry: 1 00:26:55.968 Error Location: 0x28 00:26:55.968 LBA: 0x0 00:26:55.968 Namespace: 0x0 00:26:55.968 Vendor Log Page: 0x0 00:26:55.968 00:26:55.968 Number of Queues 00:26:55.968 ================ 00:26:55.968 Number of I/O Submission Queues: 128 00:26:55.968 Number of I/O Completion Queues: 128 00:26:55.968 00:26:55.968 ZNS Specific Controller Data 00:26:55.968 
============================ 00:26:55.968 Zone Append Size Limit: 0 00:26:55.968 00:26:55.968 00:26:55.968 Active Namespaces 00:26:55.968 ================= 00:26:55.968 get_feature(0x05) failed 00:26:55.968 Namespace ID:1 00:26:55.968 Command Set Identifier: NVM (00h) 00:26:55.968 Deallocate: Supported 00:26:55.968 Deallocated/Unwritten Error: Not Supported 00:26:55.968 Deallocated Read Value: Unknown 00:26:55.968 Deallocate in Write Zeroes: Not Supported 00:26:55.968 Deallocated Guard Field: 0xFFFF 00:26:55.968 Flush: Supported 00:26:55.969 Reservation: Not Supported 00:26:55.969 Namespace Sharing Capabilities: Multiple Controllers 00:26:55.969 Size (in LBAs): 1953525168 (931GiB) 00:26:55.969 Capacity (in LBAs): 1953525168 (931GiB) 00:26:55.969 Utilization (in LBAs): 1953525168 (931GiB) 00:26:55.969 UUID: d125aa39-28c1-4eaf-ad41-8328c68c5e4a 00:26:55.969 Thin Provisioning: Not Supported 00:26:55.969 Per-NS Atomic Units: Yes 00:26:55.969 Atomic Boundary Size (Normal): 0 00:26:55.969 Atomic Boundary Size (PFail): 0 00:26:55.969 Atomic Boundary Offset: 0 00:26:55.969 NGUID/EUI64 Never Reused: No 00:26:55.969 ANA group ID: 1 00:26:55.969 Namespace Write Protected: No 00:26:55.969 Number of LBA Formats: 1 00:26:55.969 Current LBA Format: LBA Format #00 00:26:55.969 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:55.969 00:26:55.969 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:55.969 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:55.969 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:55.969 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:55.969 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:55.969 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:55.969 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:55.969 rmmod nvme_tcp 00:26:55.969 rmmod nvme_fabrics 00:26:55.969 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:56.227 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:56.227 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:56.227 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:56.227 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:56.227 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:56.227 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:56.227 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:56.227 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:56.228 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.228 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:56.228 20:25:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.132 20:25:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:58.132 
20:25:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:58.132 20:25:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:58.132 20:25:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:58.132 20:25:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:58.132 20:25:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:58.132 20:25:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:58.132 20:25:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:58.132 20:25:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:58.132 20:25:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:58.132 20:25:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:00.662 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:00.662 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:00.662 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:00.662 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:00.662 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:00.662 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:00.662 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:00.662 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:00.662 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:00.662 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:00.662 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:00.662 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:00.920 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:00.920 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:00.920 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:00.920 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:01.858 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:27:01.858 00:27:01.858 real 0m15.520s 00:27:01.858 user 0m3.602s 00:27:01.858 sys 0m8.071s 00:27:01.858 20:25:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:01.858 20:25:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:01.858 ************************************ 00:27:01.858 END TEST nvmf_identify_kernel_target 00:27:01.858 ************************************ 00:27:01.858 20:25:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:01.858 20:25:27 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:01.858 20:25:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:01.858 20:25:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:01.858 20:25:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:01.858 ************************************ 00:27:01.858 START TEST nvmf_auth_host 00:27:01.858 ************************************ 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:01.858 * Looking for test storage... 00:27:01.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:01.858 20:25:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.149 
20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:07.149 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:07.149 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:07.149 Found net devices under 0000:af:00.0: 
cvl_0_0 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:07.149 Found net devices under 0000:af:00.1: cvl_0_1 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:07.149 20:25:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:07.149 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:07.149 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:07.149 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:07.149 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:07.149 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:07.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:07.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:27:07.150 00:27:07.150 --- 10.0.0.2 ping statistics --- 00:27:07.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.150 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:07.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:07.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:27:07.150 00:27:07.150 --- 10.0.0.1 ping statistics --- 00:27:07.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.150 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=185489 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 185489 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 185489 ']' 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
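The auth test rediscovers the same two E810 ports the way the previous test did: each whitelisted PCI function is resolved to its kernel net device through sysfs, and only ports whose link state passes the [[ up == up ]] check are kept. A condensed sketch of that loop; the operstate read is an assumption for how that state is obtained, the rest mirrors the trace.

  # Map e810 PCI functions to their net devices (produces the "Found net devices
  # under 0000:af:00.x: cvl_0_x" lines seen above)
  pci_devs=(0000:af:00.0 0000:af:00.1)
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)      # e.g. .../net/cvl_0_0
      for net_dev in "${pci_net_devs[@]}"; do
          [[ $(< "$net_dev/operstate") == up ]] || continue  # assumed source of the 'up' check
          net_devs+=("${net_dev##*/}")                       # strip the sysfs path
          echo "Found net devices under $pci: ${net_dev##*/}"
      done
  done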
00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:07.150 20:25:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9ec6bc8701e83e9c72477b22873d2619 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.0aO 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9ec6bc8701e83e9c72477b22873d2619 0 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9ec6bc8701e83e9c72477b22873d2619 0 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9ec6bc8701e83e9c72477b22873d2619 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.0aO 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.0aO 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.0aO 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4f72b3c02bc9e2d6c02dc77daaba7fa3284d59a5bc69eb08bb2b78630c865558 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.tTK 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4f72b3c02bc9e2d6c02dc77daaba7fa3284d59a5bc69eb08bb2b78630c865558 3 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4f72b3c02bc9e2d6c02dc77daaba7fa3284d59a5bc69eb08bb2b78630c865558 3 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4f72b3c02bc9e2d6c02dc77daaba7fa3284d59a5bc69eb08bb2b78630c865558 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.tTK 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.tTK 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.tTK 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5b6578ccbc17883916b98d9a002e6d5e720e63fb4f2e6d70 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.SPc 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5b6578ccbc17883916b98d9a002e6d5e720e63fb4f2e6d70 0 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5b6578ccbc17883916b98d9a002e6d5e720e63fb4f2e6d70 0 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5b6578ccbc17883916b98d9a002e6d5e720e63fb4f2e6d70 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:08.528 
20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.SPc 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.SPc 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.SPc 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c880ffdb21bdd30902f9df6fefa034ad6ab60e31f5fa3d96 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.2Qb 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c880ffdb21bdd30902f9df6fefa034ad6ab60e31f5fa3d96 2 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c880ffdb21bdd30902f9df6fefa034ad6ab60e31f5fa3d96 2 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c880ffdb21bdd30902f9df6fefa034ad6ab60e31f5fa3d96 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.2Qb 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.2Qb 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.2Qb 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cc1fa3c5ceacf96b1d38405a51406b83 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.1of 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@729 -- # format_dhchap_key cc1fa3c5ceacf96b1d38405a51406b83 1 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cc1fa3c5ceacf96b1d38405a51406b83 1 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cc1fa3c5ceacf96b1d38405a51406b83 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:08.528 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.1of 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.1of 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.1of 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f5067593de08e52cd3ecd8a7480dd697 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ov4 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f5067593de08e52cd3ecd8a7480dd697 1 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f5067593de08e52cd3ecd8a7480dd697 1 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f5067593de08e52cd3ecd8a7480dd697 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ov4 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ov4 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ov4 00:27:08.787 20:25:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:08.788 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:08.788 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:08.788 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:08.788 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:08.788 20:25:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:08.788 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:08.788 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=005c74319fc77d82ed74c9f3d1d8f81cc26695475f84816a 00:27:08.788 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:08.788 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.YB7 00:27:08.788 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 005c74319fc77d82ed74c9f3d1d8f81cc26695475f84816a 2 00:27:08.788 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 005c74319fc77d82ed74c9f3d1d8f81cc26695475f84816a 2 00:27:08.788 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:08.788 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:08.788 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=005c74319fc77d82ed74c9f3d1d8f81cc26695475f84816a 00:27:08.788 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:08.788 20:25:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.YB7 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.YB7 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.YB7 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e8d9bb5bc1da8f32b2bfdd52048ed781 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.PAP 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e8d9bb5bc1da8f32b2bfdd52048ed781 0 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e8d9bb5bc1da8f32b2bfdd52048ed781 0 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e8d9bb5bc1da8f32b2bfdd52048ed781 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.PAP 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.PAP 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- 
# ckeys[3]=/tmp/spdk.key-null.PAP 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=73b81685d8be069e19c813a6c8062f1d94c786937f674e635053f14301a3de66 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.NsM 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 73b81685d8be069e19c813a6c8062f1d94c786937f674e635053f14301a3de66 3 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 73b81685d8be069e19c813a6c8062f1d94c786937f674e635053f14301a3de66 3 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=73b81685d8be069e19c813a6c8062f1d94c786937f674e635053f14301a3de66 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:08.788 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:09.046 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.NsM 00:27:09.046 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.NsM 00:27:09.046 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.NsM 00:27:09.046 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:09.046 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 185489 00:27:09.046 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 185489 ']' 00:27:09.046 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.046 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:09.046 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
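Each gen_dhchap_key <digest> <len> call above draws len/2 random bytes as a hex string with xxd, then format_dhchap_key wraps it into an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<id>:<base64 blob>:, where <id> is the digit the digests map assigns (0 null, 1 sha256, 2 sha384, 3 sha512). The python step is not expanded by xtrace; the sketch below reconstructs it under the assumption that the base64 blob is the ASCII hex string followed by its little-endian CRC-32, so treat it as an approximation of nvmf/common.sh rather than a verbatim copy:

# Generate one "null"-hinted 32-character secret the way the trace appears to.
hex_key=$(xxd -p -c0 -l 16 /dev/urandom)        # 32 hex characters
secret=$(python3 - "$hex_key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                      # the ASCII hex string is the secret itself
crc = zlib.crc32(key).to_bytes(4, "little")     # assumed little-endian CRC-32 suffix
print("DHHC-1:00:" + base64.b64encode(key + crc).decode() + ":")
PY
)
keyfile=$(mktemp -t spdk.key-null.XXX)
echo "$secret" > "$keyfile"
chmod 0600 "$keyfile"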
00:27:09.046 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:09.046 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.304 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:09.304 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:09.304 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:09.304 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.0aO 00:27:09.304 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.304 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.tTK ]] 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tTK 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.SPc 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.2Qb ]] 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2Qb 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.1of 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ov4 ]] 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ov4 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
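The loop over ${!keys[@]} seen here registers each generated key file, plus its ckeyN counterpart when one exists, with the running SPDK application's keyring under a stable name, so the later attach RPCs can reference the secrets by name rather than by path. Outside the harness the same calls can be issued with scripts/rpc.py against the default /var/tmp/spdk.sock; the paths below are the mktemp files from this run:

# Register host (keyN) and controller (ckeyN) secrets with the SPDK keyring.
./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.0aO
./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tTK
./scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.SPc
./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2Qb
./scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha256.1of
./scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ov4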
00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.YB7 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.PAP ]] 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.PAP 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.NsM 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
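With the kernel_subsystem/kernel_namespace/kernel_port paths computed above, configure_kernel_target turns the first usable local NVMe namespace (/dev/nvme0n1 in this run) into a Linux-kernel NVMe/TCP target, nqn.2024-02.io.spdk:cnode0, listening on 10.0.0.1:4420; that is what the modprobe, mkdir, echo, and ln -s calls in the following lines do. A condensed configfs sketch of the sequence, with the redirection targets (which xtrace does not print) filled in using the standard nvmet attribute names as an assumption:

# Kernel NVMe/TCP target built over configfs (attribute file names assumed).
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"          # later set back to 0 once host0 is whitelisted
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The discovery run that follows (nvme discover ... -a 10.0.0.1 -t tcp -s 4420) should then report two records: the discovery subsystem itself and nqn.2024-02.io.spdk:cnode0.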
00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:09.562 20:25:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:12.098 Waiting for block devices as requested 00:27:12.098 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:27:12.098 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:12.098 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:12.098 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:12.430 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:12.430 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:12.430 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:12.430 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:12.697 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:12.697 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:12.697 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:12.697 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:12.955 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:12.955 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:12.955 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:13.213 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:13.213 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:13.779 No valid GPT data, bailing 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:13.779 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:27:14.037 00:27:14.037 Discovery Log Number of Records 2, Generation counter 2 00:27:14.037 =====Discovery Log Entry 0====== 00:27:14.037 trtype: tcp 00:27:14.037 adrfam: ipv4 00:27:14.037 subtype: current discovery subsystem 00:27:14.037 treq: not specified, sq flow control disable supported 00:27:14.037 portid: 1 00:27:14.037 trsvcid: 4420 00:27:14.037 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:14.037 traddr: 10.0.0.1 00:27:14.037 eflags: none 00:27:14.037 sectype: none 00:27:14.037 =====Discovery Log Entry 1====== 00:27:14.037 trtype: tcp 00:27:14.037 adrfam: ipv4 00:27:14.037 subtype: nvme subsystem 00:27:14.037 treq: not specified, sq flow control disable supported 00:27:14.037 portid: 1 00:27:14.037 trsvcid: 4420 00:27:14.037 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:14.037 traddr: 10.0.0.1 00:27:14.037 eflags: none 00:27:14.037 sectype: none 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 
]] 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.037 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.038 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.038 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.038 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:14.038 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.038 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.038 nvme0n1 00:27:14.038 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.038 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.038 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.038 20:25:39 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.038 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.038 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: ]] 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.296 
20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.296 nvme0n1 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.296 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.555 20:25:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: ]] 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.555 nvme0n1 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
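Each of these iterations pairs nvmet_auth_set_key, which programs the negotiated digest, DH group, and the two DHHC-1 secrets into the kernel target's entry for nqn.2024-02.io.spdk:host0, with connect_authenticate, which pins the SPDK host to the same digest/group via bdev_nvme_set_options and then attaches using the matching keyring names. xtrace omits the redirection targets of the echo calls, so the per-host dhchap_* configfs files below are assumed destinations, and rpc.py stands in for rpc_cmd:

# Target side: bind the secrets for this host NQN (attribute names assumed).
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
cat /tmp/spdk.key-null.SPc   > "$host/dhchap_key"        # key1 from this run
cat /tmp/spdk.key-sha384.2Qb > "$host/dhchap_ctrl_key"   # ckey1 from this run

# Host side: restrict negotiation, attach with key1/ckey1, verify, and detach.
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
./scripts/rpc.py bdev_nvme_get_controllers      # expect a controller named nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0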
00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: ]] 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.555 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.814 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.814 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.814 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.814 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.814 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.814 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.814 20:25:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.814 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.814 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.814 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.814 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.814 20:25:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.814 20:25:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:14.814 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.814 20:25:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.814 nvme0n1 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: ]] 00:27:14.814 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:14.815 20:25:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.815 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.073 nvme0n1 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.073 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.331 nvme0n1 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: ]] 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.332 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.590 nvme0n1 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: ]] 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.590 20:25:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.847 nvme0n1 00:27:15.847 
20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: ]] 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.847 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.848 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.848 20:25:41 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:15.848 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.848 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.848 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.848 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.848 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.848 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.848 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.848 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.848 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.848 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.848 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:15.848 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.848 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.105 nvme0n1 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.105 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
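[editor's note] Each cycle traced above is the initiator-side half of one DH-HMAC-CHAP round trip: bdev_nvme_set_options pins the digest/DH-group pair under test, bdev_nvme_attach_controller performs the authenticated connect to the kernel nvmet target at 10.0.0.1:4420, bdev_nvme_get_controllers confirms that nvme0 came up, and bdev_nvme_detach_controller tears it down. A minimal sketch of that sequence follows, reconstructed from the commands in this trace; the rpc.py path and the standalone-script framing are assumptions, not taken from this log (the test itself uses its rpc_cmd helper).

  #!/usr/bin/env bash
  # Sketch of one connect/verify/detach cycle as traced above.
  rpc=./scripts/rpc.py          # hypothetical path to SPDK's RPC client
  digest=sha256
  dhgroup=ffdhe3072
  keyid=3

  # Restrict the initiator to the digest/DH group pair under test.
  $rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach to the target on the initiator IP, presenting the host key and,
  # when one is defined for this slot, the bidirectional controller key.
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # Authentication succeeded if the named controller shows up; then detach.
  [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  $rpc bdev_nvme_detach_controller nvme0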
00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: ]] 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.106 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.363 nvme0n1 00:27:16.363 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.363 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.363 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.363 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.363 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.364 
20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.364 20:25:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.364 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.622 nvme0n1 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: ]] 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:16.622 20:25:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.622 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.623 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.623 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.623 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.623 20:25:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.623 20:25:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:16.623 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.623 20:25:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.187 nvme0n1 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: ]] 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.187 20:25:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.187 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.445 nvme0n1 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: ]] 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.445 20:25:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.445 20:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.446 20:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.446 20:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.446 20:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.446 20:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.446 20:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.446 20:25:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.446 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:17.446 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.446 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.703 nvme0n1 00:27:17.703 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.703 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.703 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.703 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.703 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.703 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.703 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.703 20:25:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.703 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.703 20:25:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.703 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.703 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.703 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:17.703 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.703 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.703 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:17.703 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
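[editor's note] The outer structure visible in the trace (host/auth.sh@101-104) walks every key slot for each FFDHE group, programming the kernel target via nvmet_auth_set_key and then running the attach/verify/detach round trip via connect_authenticate. An illustrative driver loop in that iteration order is sketched below; the helper functions are assumed to be sourced from the test script, and only sha256 with key slots 0-4 actually appears in this portion of the log.

  #!/usr/bin/env bash
  # Illustrative driver loop matching the iteration seen in this trace.
  digest=sha256                                        # only sha256 is exercised here
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups appearing in this log

  for dhgroup in "${dhgroups[@]}"; do
      for keyid in 0 1 2 3 4; do                       # key slots exercised above
          # Program the nvmet target with the digest, DH group and DHHC-1
          # secrets for this slot, then do one authenticated connect from
          # the SPDK initiator and tear it back down.
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
  done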
00:27:17.703 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:17.703 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:17.703 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.703 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: ]] 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.704 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.962 nvme0n1 00:27:17.962 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.962 20:25:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.962 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.962 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.962 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.221 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.479 nvme0n1 00:27:18.479 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.479 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:18.480 20:25:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: ]] 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.480 20:25:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.045 nvme0n1 00:27:19.045 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.045 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.045 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.045 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.045 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.045 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.045 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.045 
20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.045 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.045 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.045 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.045 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.045 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:19.045 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: ]] 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.046 20:25:44 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.046 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.612 nvme0n1 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: ]] 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.612 20:25:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.180 nvme0n1 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.180 
20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: ]] 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.180 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.747 nvme0n1 00:27:20.747 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.747 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.747 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.747 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.747 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.747 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.747 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.747 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.747 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.748 20:25:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.315 nvme0n1 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: ]] 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.315 20:25:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.252 nvme0n1 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: ]] 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 
-- # local ip 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.252 20:25:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.821 nvme0n1 00:27:22.821 20:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.821 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.821 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.821 20:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.821 20:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.821 20:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.821 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.821 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: ]] 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.081 20:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.649 nvme0n1 00:27:23.649 20:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.649 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.909 20:25:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.909 20:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.909 20:25:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.909 
20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: ]] 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
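Each connect_authenticate pass in this trace boils down to four RPCs on the initiator side: constrain the allowed DH-HMAC-CHAP digest and FFDHE group, attach the controller with the key (and controller key, when one is defined) for the current keyid, check that nvme0 shows up in bdev_nvme_get_controllers, and detach it again. A minimal stand-alone sketch of one such pass (sha256 / ffdhe8192 / keyid 3), assuming rpc_cmd in the trace is the usual wrapper around SPDK's scripts/rpc.py, that the target listener from earlier in the run is still up on 10.0.0.1:4420, and that key3/ckey3 are keyring entries created before this excerpt:

# Sketch only: mirrors the traced commands, not part of the test script itself.
rpc=./scripts/rpc.py
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect exactly: nvme0
$rpc bdev_nvme_detach_controller nvme0
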
00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.909 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.846 nvme0n1 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:24.846 
20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.846 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:24.847 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.847 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.847 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.847 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.847 20:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.847 20:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.847 20:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.847 20:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.847 20:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.847 20:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.847 20:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.847 20:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.847 20:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.847 20:25:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.847 20:25:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:24.847 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.847 20:25:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.784 nvme0n1 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: ]] 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.784 20:25:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.784 nvme0n1 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: ]] 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
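The secrets passed to nvmet_auth_set_key above use the DH-HMAC-CHAP secret representation that nvme-cli's gen-dhchap-key emits: DHHC-1:<t>:<base64 payload>:, where <t> names the hash used to transform the secret (00 = untransformed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the payload is the secret followed by a 4-byte CRC. A quick way to check one of the keys from this trace (outside the test): a 32-byte secret should decode to 36 bytes.

# Sketch only, assuming GNU coreutils base64; decodes the keyid-0 secret seen above.
key='DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce:'
printf '%s' "$key" | cut -d: -f3 | base64 -d | wc -c   # prints 36 (32-byte secret + 4-byte CRC)
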
00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.784 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.043 nvme0n1 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: ]] 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:26.043 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.044 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.044 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.044 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.044 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.044 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.044 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.044 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.044 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.044 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.044 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.044 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.044 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.044 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.044 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.044 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.044 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.303 nvme0n1 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: ]] 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.303 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.562 nvme0n1 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.562 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.820 nvme0n1 00:27:26.820 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.821 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.821 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.821 20:25:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.821 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.821 20:25:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: ]] 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
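[Editor's note] The nvmf/common.sh@741-755 entries that repeat around this point are the get_main_ns_ip helper picking the address that the following bdev_nvme_attach_controller call will dial. A minimal reconstruction from the trace is sketched below; the variable names (TEST_TRANSPORT, NVMF_INITIATOR_IP, NVMF_FIRST_TARGET_IP) come from the surrounding test environment and the return-code handling is an assumption, so the real helper in nvmf/common.sh may differ in detail.

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # target-side address for RDMA runs
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # initiator-side address for TCP runs

    [[ -z $TEST_TRANSPORT ]] && return 1                    # no transport selected
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # unknown transport

    ip=${ip_candidates[$TEST_TRANSPORT]}  # name of the variable holding the address
    [[ -z ${!ip} ]] && return 1           # indirect expansion yields the address itself
    echo "${!ip}"                         # 10.0.0.1 in this run
}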
00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.821 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.080 nvme0n1 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: ]] 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
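[Editor's note] Every connect_authenticate iteration in this trace has the same shape: program the host-side DH-HMAC-CHAP policy, attach over TCP with the key pair under test, confirm the controller actually appeared, and tear it down before the next digest/dhgroup/keyid combination. A condensed sketch of one iteration (ffdhe3072, keyid 1) follows; rpc_cmd is the autotest helper that forwards to scripts/rpc.py, the key1/ckey1 names are assumed to have been registered earlier in the script (not shown in this excerpt), and --dhchap-ctrlr-key is only passed when a controller key exists for that keyid (keyid 4 has none).

digest=sha384 dhgroup=ffdhe3072 keyid=1

# Restrict the host to the digest and DH group under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach to the target with bidirectional authentication keys.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a "$(get_main_ns_ip)" -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# The attach only succeeds if authentication succeeded, so finding nvme0 in the
# controller list is the pass/fail check; then detach before the next iteration.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0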
00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.080 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.340 nvme0n1 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: ]] 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.340 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.599 nvme0n1 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: ]] 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.599 20:25:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.858 nvme0n1 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.858 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.859 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.859 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.859 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:27.859 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.859 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.123 nvme0n1 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.123 20:25:53 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: ]] 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.123 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.382 nvme0n1 00:27:28.382 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.382 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.382 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.382 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.382 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.382 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: ]] 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.641 20:25:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.900 nvme0n1 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.900 20:25:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: ]] 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.900 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.901 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.901 20:25:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.901 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.901 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.901 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.901 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.901 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:28.901 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.901 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.160 nvme0n1 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: ]] 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:29.160 20:25:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.160 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.728 nvme0n1 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.728 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.729 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.729 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.729 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.729 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.729 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.729 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.729 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.729 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.729 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.729 20:25:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.729 20:25:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:29.729 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:29.729 20:25:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.987 nvme0n1 00:27:29.987 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.987 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.987 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.987 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.987 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.987 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.987 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.987 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: ]] 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.988 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.555 nvme0n1 00:27:30.555 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.555 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: ]] 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.556 20:25:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.124 nvme0n1 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.124 20:25:56 
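The pass that just completed above is the pattern every iteration in this section repeats on the initiator side: narrow the allowed DH-HMAC-CHAP digest/dhgroup pair with bdev_nvme_set_options, attach with the key (and, when present, the controller key) for the current index, check that a controller named nvme0 shows up, then detach. A minimal sketch of one such pass follows; it assumes scripts/rpc.py is invoked directly in place of the harness's rpc_cmd wrapper and that key1/ckey1 name secrets registered earlier in the run.

#!/usr/bin/env bash
# Minimal initiator-side sketch of one connect_authenticate pass
# (sha384 + ffdhe6144, key index 1). The rpc path and key names are
# assumptions; the RPC flags mirror what the log shows.
set -e
rpc=./scripts/rpc.py

# Restrict negotiation to the digest/dhgroup pair under test.
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Attach with bidirectional authentication (host key + controller key).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Same verification the log performs with jq before moving on.
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

# Tear down before the next digest/dhgroup/key combination.
$rpc bdev_nvme_detach_controller nvme0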
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: ]] 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.124 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.693 nvme0n1 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: ]] 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.693 20:25:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.263 nvme0n1 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
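Key index 4 above is the one case where ckeys[keyid] is empty, so the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion yields nothing and the attach is issued without --dhchap-ctrlr-key. A standalone sketch of that ":+" parameter-expansion idiom, with placeholder array contents:

#!/usr/bin/env bash
# The ":+" expansion emits the extra controller-key arguments only when a
# controller key exists for that index; index 4 is deliberately empty here,
# mirroring the log. Array contents are placeholders.
ckeys=([0]=ck0 [1]=ck1 [2]=ck2 [3]=ck3 [4]="")

for keyid in "${!ckeys[@]}"; do
    # Expands to two words for indices 0-3 and to nothing for index 4.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${ckey[*]:-<no controller key>}"
done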
00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.263 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.831 nvme0n1 00:27:32.831 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.831 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.831 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.831 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.831 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.831 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.831 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.831 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.831 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.831 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.831 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.831 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.831 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.831 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:32.831 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.831 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.831 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:32.831 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.831 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:32.831 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: ]] 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
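In between those passes, the nvmet_auth_set_key calls (the echo 'hmac(sha384)', echo <dhgroup>, echo DHHC-1:... lines) install the same secrets on the target. In the harness these echoes are redirected into the kernel nvmet host entry; a rough sketch follows, where the configfs path and the dhchap_* attribute names are assumptions rather than something shown in this log, and the DHHC-1 strings are placeholders.

#!/usr/bin/env bash
# Assumed target-side counterpart of nvmet_auth_set_key: write one key index's
# hash, DH group and secrets into the nvmet host entry. Paths and attribute
# names are assumptions; secrets are placeholders, not real keys.
hostnqn=nqn.2024-02.io.spdk:host0
host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn

digest=sha384
dhgroup=ffdhe8192
key='DHHC-1:00:placeholder-host-secret:'
ckey='DHHC-1:03:placeholder-ctrlr-secret:'   # leave empty to skip bidirectional auth

echo "hmac(${digest})" > "$host_dir/dhchap_hash"
echo "$dhgroup"        > "$host_dir/dhchap_dhgroup"
echo "$key"            > "$host_dir/dhchap_key"
[[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"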
00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.832 20:25:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.768 nvme0n1 00:27:33.768 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.768 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.768 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: ]] 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.769 20:25:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.337 nvme0n1 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: ]] 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.337 20:25:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.272 nvme0n1 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: ]] 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.272 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.273 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.273 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.273 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.273 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.273 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.273 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.273 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.273 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.273 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.273 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.273 20:26:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.273 20:26:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:35.273 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.273 20:26:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.207 nvme0n1 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.207 20:26:01 
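The get_main_ns_ip fragment this line breaks off into resolves which address the attach should use: an associative array maps the transport to the name of an environment variable (rdma to NVMF_FIRST_TARGET_IP, tcp to NVMF_INITIATOR_IP) and that name is then dereferenced, which is why the log ends up echoing 10.0.0.1 for tcp. A standalone sketch of that indirection, with placeholder addresses:

#!/usr/bin/env bash
# Sketch of the ip_candidates lookup: the transport picks a variable *name*,
# and indirect expansion turns the name into its value. Addresses are placeholders.
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_INITIATOR_IP=10.0.0.1
TEST_TRANSPORT=tcp

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # variable name, e.g. NVMF_INITIATOR_IP
    ip=${!ip}                              # indirect expansion to its value
    [[ -n $ip ]] || return 1
    echo "$ip"
}

get_main_ns_ip   # prints 10.0.0.1 for the tcp transport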
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.207 20:26:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.773 nvme0n1 00:27:36.773 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.773 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.773 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.773 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.773 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: ]] 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.030 nvme0n1 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.030 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.288 20:26:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: ]] 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.288 nvme0n1 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:37.288 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:37.289 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.289 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:37.289 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:37.289 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: ]] 00:27:37.289 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:37.289 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:27:37.289 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.289 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.289 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:37.289 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:37.289 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.289 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:37.289 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.289 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.289 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.289 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.289 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.547 nvme0n1 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.547 20:26:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: ]] 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.547 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.548 20:26:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.548 20:26:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:37.548 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.548 20:26:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.807 nvme0n1 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.807 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.067 nvme0n1 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.067 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: ]] 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.068 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.327 nvme0n1 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.327 
20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: ]] 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.327 20:26:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.327 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.587 nvme0n1 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: ]] 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.587 20:26:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.588 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.588 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.588 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.851 nvme0n1 00:27:38.851 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.851 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.851 20:26:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.851 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.851 20:26:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.851 20:26:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: ]] 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
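(Editor's note: the trace above repeats one sequence per digest/dhgroup/keyid combination: install the key on the target, constrain the host's DH-HMAC-CHAP options, attach with the per-key credentials, verify, detach. A rough shell sketch of that pattern, reusing the helper names visible in the trace (rpc_cmd, nvmet_auth_set_key) with illustrative loop bounds; in the real script the controller key is only passed when a ckey exists for that keyid.)

  # Walk every DH group and key index with the same connect/verify/teardown cycle (sketch only).
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
    for keyid in "${!keys[@]}"; do
      # Target side: install the DH-HMAC-CHAP key for this keyid.
      nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
      # Host side: restrict the initiator to the matching digest and DH group.
      rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
      # Attach with the per-key credentials, confirm the controller came up, then detach.
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done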
00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.851 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.111 nvme0n1 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.111 
20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.111 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.371 nvme0n1 00:27:39.371 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.371 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.371 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.371 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.371 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.371 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.371 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.371 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.371 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.371 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.371 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.371 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:39.371 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.371 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:27:39.371 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.371 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.371 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:39.371 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:39.371 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:39.371 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: ]] 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.372 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.633 nvme0n1 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: ]] 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.633 20:26:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.633 20:26:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.892 nvme0n1 00:27:39.892 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.892 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.892 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.892 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.892 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.893 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.893 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.893 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.893 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.893 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
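(Editor's note: before each attach the trace also re-derives the initiator address via get_main_ns_ip; for the tcp transport it resolves NVMF_INITIATOR_IP, which is 10.0.0.1 in this run. A minimal sketch of that selection under the same variable names, assuming a transport variable such as $TEST_TRANSPORT.)

  # Pick the address to dial based on transport type (sketch of get_main_ns_ip).
  declare -A ip_candidates=( ["rdma"]=NVMF_FIRST_TARGET_IP ["tcp"]=NVMF_INITIATOR_IP )
  ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP for tcp
  echo "${!ip}"                          # indirect expansion; prints 10.0.0.1 in this run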
00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: ]] 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.151 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.152 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.152 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.152 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.152 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.152 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.427 nvme0n1 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: ]] 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.427 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.724 nvme0n1 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.724 20:26:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.002 nvme0n1 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: ]] 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
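[editor note] The trace above, and the blocks that follow for ffdhe6144 and ffdhe8192, repeat one fixed pattern per DH group and per key index. A minimal bash sketch of that loop, reconstructed from the host/auth.sh xtrace lines, is below; it is an abridged illustration rather than the verbatim test source, and the internals of nvmet_auth_set_key (which pushes the DHHC-1 key, the 'hmac(sha512)' digest and the FFDHE group to the kernel target's per-host auth attributes) are assumed, since this log only shows its echo output.

# Abridged sketch of the sha512 DH-CHAP loop traced above (illustration only).
# Assumed: the body of nvmet_auth_set_key; everything else mirrors the xtrace.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # Tell the SPDK host which digest/DH group to offer, then attach with DH-CHAP keys.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # Authentication succeeded iff the controller shows up; then clean up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

for dhgroup in "${dhgroups[@]}"; do        # ffdhe4096, ffdhe6144, ffdhe8192, ...
    for keyid in "${!keys[@]}"; do         # key indexes 0-4
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # target side (internals not shown in this log)
        connect_authenticate sha512 "$dhgroup" "$keyid"  # host side, as sketched above
    done
done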
00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.002 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.583 nvme0n1 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: ]] 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
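[editor note] The get_main_ns_ip trace that recurs before every attach (local ip; ip_candidates; echo 10.0.0.1) amounts to picking an environment-variable name by transport and dereferencing it, which is why every bdev_nvme_attach_controller call in this run targets -a 10.0.0.1 -s 4420. A rough reconstruction from the nvmf/common.sh lines follows; the name of the transport variable (the trace only shows its expanded value "tcp") and the early-return error handling are assumptions.

# Rough sketch of get_main_ns_ip as traced above (nvmf/common.sh@741-755).
# Assumption: transport is read from $TEST_TRANSPORT; the `return 1` fallbacks are assumed.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs use the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this job) use the initiator IP
    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}         # -> NVMF_INITIATOR_IP for tcp
    [[ -z ${!ip} ]] && return 1                  # indirect expansion; 10.0.0.1 in this run
    echo "${!ip}"
}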
00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.583 20:26:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.151 nvme0n1 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: ]] 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.151 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:42.152 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:42.152 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.152 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:42.152 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.152 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.152 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.152 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.152 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.152 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.152 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.152 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.152 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.152 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.152 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.152 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.152 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.152 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.152 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:42.152 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.152 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.719 nvme0n1 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: ]] 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:42.719 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.720 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:42.720 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.720 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.720 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.720 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.720 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.720 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.720 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.720 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.720 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.720 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.720 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.720 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.720 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.720 20:26:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.720 20:26:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.720 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.720 20:26:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.288 nvme0n1 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.288 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.857 nvme0n1 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.857 20:26:08 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVjNmJjODcwMWU4M2U5YzcyNDc3YjIyODczZDI2MTn8kQce: 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: ]] 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY3MmIzYzAyYmM5ZTJkNmMwMmRjNzdkYWFiYTdmYTMyODRkNTlhNWJjNjllYjA4YmIyYjc4NjMwYzg2NTU1ON6EevI=: 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.857 20:26:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.858 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.858 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.858 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.858 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.858 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.858 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.858 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.858 20:26:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.858 20:26:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:43.858 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.858 20:26:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.425 nvme0n1 00:27:44.425 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.425 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.425 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.425 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.425 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.425 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: ]] 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.683 20:26:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.252 nvme0n1 00:27:45.252 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.252 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.252 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.252 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.252 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.252 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.511 20:26:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2MxZmEzYzVjZWFjZjk2YjFkMzg0MDVhNTE0MDZiODMFh977: 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: ]] 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjUwNjc1OTNkZTA4ZTUyY2QzZWNkOGE3NDgwZGQ2OTdk2rAn: 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.511 20:26:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.447 nvme0n1 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDA1Yzc0MzE5ZmM3N2Q4MmVkNzRjOWYzZDFkOGY4MWNjMjY2OTU0NzVmODQ4MTZhM7HJxg==: 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: ]] 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZThkOWJiNWJjMWRhOGYzMmIyYmZkZDUyMDQ4ZWQ3ODHyVs7F: 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:46.447 20:26:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.447 20:26:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.017 nvme0n1 00:27:47.017 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.017 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.017 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.017 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.017 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.017 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.017 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.017 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.017 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.017 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzNiODE2ODVkOGJlMDY5ZTE5YzgxM2E2YzgwNjJmMWQ5NGM3ODY5MzdmNjc0ZTYzNTA1M2YxNDMwMWEzZGU2NuXpLOY=: 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:47.277 20:26:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.845 nvme0n1 00:27:47.845 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.845 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.845 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.845 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.845 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.845 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.845 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.845 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.845 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.845 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWI2NTc4Y2NiYzE3ODgzOTE2Yjk4ZDlhMDAyZTZkNWU3MjBlNjNmYjRmMmU2ZDcw8AYvdw==: 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: ]] 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg4MGZmZGIyMWJkZDMwOTAyZjlkZjZmZWZhMDM0YWQ2YWI2MGUzMWY1ZmEzZDk2JenTYg==: 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.108 
20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.108 request: 00:27:48.108 { 00:27:48.108 "name": "nvme0", 00:27:48.108 "trtype": "tcp", 00:27:48.108 "traddr": "10.0.0.1", 00:27:48.108 "adrfam": "ipv4", 00:27:48.108 "trsvcid": "4420", 00:27:48.108 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:48.108 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:48.108 "prchk_reftag": false, 00:27:48.108 "prchk_guard": false, 00:27:48.108 "hdgst": false, 00:27:48.108 "ddgst": false, 00:27:48.108 "method": "bdev_nvme_attach_controller", 00:27:48.108 "req_id": 1 00:27:48.108 } 00:27:48.108 Got JSON-RPC error response 00:27:48.108 response: 00:27:48.108 { 00:27:48.108 "code": -5, 00:27:48.108 "message": "Input/output error" 00:27:48.108 } 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:48.108 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.109 request: 00:27:48.109 { 00:27:48.109 "name": "nvme0", 00:27:48.109 "trtype": "tcp", 00:27:48.109 "traddr": "10.0.0.1", 00:27:48.109 "adrfam": "ipv4", 00:27:48.109 "trsvcid": "4420", 00:27:48.109 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:48.109 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:48.109 "prchk_reftag": false, 00:27:48.109 "prchk_guard": false, 00:27:48.109 "hdgst": false, 00:27:48.109 "ddgst": false, 00:27:48.109 "dhchap_key": "key2", 00:27:48.109 "method": "bdev_nvme_attach_controller", 00:27:48.109 "req_id": 1 00:27:48.109 } 00:27:48.109 Got JSON-RPC error response 00:27:48.109 response: 00:27:48.109 { 00:27:48.109 "code": -5, 00:27:48.109 "message": "Input/output error" 00:27:48.109 } 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:48.109 20:26:13 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.109 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.370 request: 00:27:48.370 { 00:27:48.370 "name": "nvme0", 00:27:48.370 "trtype": "tcp", 00:27:48.370 "traddr": "10.0.0.1", 00:27:48.370 "adrfam": "ipv4", 
00:27:48.370 "trsvcid": "4420", 00:27:48.370 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:48.370 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:48.370 "prchk_reftag": false, 00:27:48.370 "prchk_guard": false, 00:27:48.370 "hdgst": false, 00:27:48.370 "ddgst": false, 00:27:48.370 "dhchap_key": "key1", 00:27:48.370 "dhchap_ctrlr_key": "ckey2", 00:27:48.370 "method": "bdev_nvme_attach_controller", 00:27:48.370 "req_id": 1 00:27:48.370 } 00:27:48.370 Got JSON-RPC error response 00:27:48.370 response: 00:27:48.370 { 00:27:48.370 "code": -5, 00:27:48.370 "message": "Input/output error" 00:27:48.370 } 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:48.370 rmmod nvme_tcp 00:27:48.370 rmmod nvme_fabrics 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 185489 ']' 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 185489 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 185489 ']' 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 185489 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 185489 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 185489' 00:27:48.370 killing process with pid 185489 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 185489 00:27:48.370 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 185489 00:27:48.629 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:27:48.629 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:48.629 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:48.629 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:48.629 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:48.629 20:26:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.629 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:48.629 20:26:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.531 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:50.531 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:50.531 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:50.531 20:26:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:50.531 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:50.531 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:27:50.531 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:50.531 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:50.531 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:50.531 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:50.789 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:50.789 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:50.789 20:26:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:53.322 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:53.322 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:53.322 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:53.322 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:53.322 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:53.322 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:53.322 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:53.322 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:53.322 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:53.322 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:53.322 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:53.322 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:53.322 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:53.322 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:53.322 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:53.322 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:54.257 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:27:54.257 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.0aO /tmp/spdk.key-null.SPc /tmp/spdk.key-sha256.1of /tmp/spdk.key-sha384.YB7 /tmp/spdk.key-sha512.NsM 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:54.257 20:26:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:57.546 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:57.546 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:57.546 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:57.546 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:57.546 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:57.546 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:57.546 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:57.546 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:57.546 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:57.546 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:57.546 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:57.546 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:57.546 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:57.546 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:57.546 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:57.546 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:57.546 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:57.546 00:27:57.546 real 0m55.338s 00:27:57.546 user 0m50.974s 00:27:57.546 sys 0m11.451s 00:27:57.546 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:57.546 20:26:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.546 ************************************ 00:27:57.546 END TEST nvmf_auth_host 00:27:57.546 ************************************ 00:27:57.546 20:26:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:57.546 20:26:22 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:27:57.546 20:26:22 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:57.546 20:26:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:57.546 20:26:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:57.546 20:26:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:57.546 ************************************ 00:27:57.546 START TEST nvmf_digest 00:27:57.546 ************************************ 00:27:57.546 20:26:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:57.546 * Looking for test storage... 
00:27:57.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:57.546 20:26:22 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:57.546 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:57.546 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:57.546 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:57.546 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:57.546 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:57.546 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:57.546 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:57.546 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:57.546 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:57.547 20:26:22 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:27:57.547 20:26:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:02.828 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:02.828 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:02.828 Found net devices under 0000:af:00.0: cvl_0_0 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:02.828 Found net devices under 0000:af:00.1: cvl_0_1 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:02.828 20:26:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:02.828 20:26:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:02.828 20:26:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:02.828 20:26:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:02.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:02.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:28:02.828 00:28:02.828 --- 10.0.0.2 ping statistics --- 00:28:02.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.828 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:28:02.828 20:26:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:02.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:02.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:28:02.828 00:28:02.828 --- 10.0.0.1 ping statistics --- 00:28:02.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.828 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:28:02.828 20:26:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:02.828 20:26:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:02.828 20:26:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:02.829 ************************************ 00:28:02.829 START TEST nvmf_digest_clean 00:28:02.829 ************************************ 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:02.829 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:03.088 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=200195 00:28:03.088 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 200195 00:28:03.088 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:03.088 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 200195 ']' 00:28:03.088 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.088 
20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:03.088 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.088 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:03.088 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:03.088 [2024-07-15 20:26:28.234249] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:28:03.088 [2024-07-15 20:26:28.234312] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.088 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.088 [2024-07-15 20:26:28.322215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.089 [2024-07-15 20:26:28.411561] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:03.089 [2024-07-15 20:26:28.411603] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:03.089 [2024-07-15 20:26:28.411613] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:03.089 [2024-07-15 20:26:28.411622] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:03.089 [2024-07-15 20:26:28.411629] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:03.089 [2024-07-15 20:26:28.411650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.089 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:03.089 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:03.089 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:03.089 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:03.089 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:03.349 null0 00:28:03.349 [2024-07-15 20:26:28.563833] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:03.349 [2024-07-15 20:26:28.588008] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=200219 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 200219 /var/tmp/bperf.sock 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 200219 ']' 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:03.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:03.349 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:03.349 [2024-07-15 20:26:28.632969] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:28:03.349 [2024-07-15 20:26:28.633022] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200219 ] 00:28:03.349 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.608 [2024-07-15 20:26:28.702775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.608 [2024-07-15 20:26:28.793999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.608 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:03.608 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:03.608 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:03.608 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:03.608 20:26:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:03.868 20:26:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:03.868 20:26:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:04.436 nvme0n1 00:28:04.436 20:26:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:04.436 20:26:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:04.436 Running I/O for 2 seconds... 
00:28:06.343 00:28:06.343 Latency(us) 00:28:06.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.343 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:06.343 nvme0n1 : 2.00 17340.33 67.74 0.00 0.00 7371.33 3902.37 15966.95 00:28:06.343 =================================================================================================================== 00:28:06.343 Total : 17340.33 67.74 0.00 0.00 7371.33 3902.37 15966.95 00:28:06.343 0 00:28:06.343 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:06.343 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:06.343 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:06.343 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:06.343 | select(.opcode=="crc32c") 00:28:06.343 | "\(.module_name) \(.executed)"' 00:28:06.343 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:06.603 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:06.603 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:06.603 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:06.603 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:06.603 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 200219 00:28:06.603 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 200219 ']' 00:28:06.603 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 200219 00:28:06.603 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:06.603 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:06.603 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 200219 00:28:06.863 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:06.863 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:06.863 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 200219' 00:28:06.863 killing process with pid 200219 00:28:06.863 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 200219 00:28:06.863 Received shutdown signal, test time was about 2.000000 seconds 00:28:06.863 00:28:06.863 Latency(us) 00:28:06.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.863 =================================================================================================================== 00:28:06.863 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:06.863 20:26:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 200219 00:28:06.863 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:06.863 20:26:32 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:06.863 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:06.863 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:06.863 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:06.863 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:06.863 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:06.863 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=200977 00:28:06.863 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 200977 /var/tmp/bperf.sock 00:28:06.863 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:06.863 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 200977 ']' 00:28:06.863 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:06.863 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:06.863 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:06.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:06.863 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:06.863 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:07.122 [2024-07-15 20:26:32.225425] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:28:07.122 [2024-07-15 20:26:32.225486] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200977 ] 00:28:07.122 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:07.122 Zero copy mechanism will not be used. 
00:28:07.122 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.122 [2024-07-15 20:26:32.314710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.122 [2024-07-15 20:26:32.438271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.381 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:07.381 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:07.381 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:07.381 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:07.381 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:07.641 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:07.641 20:26:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:07.900 nvme0n1 00:28:07.900 20:26:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:07.900 20:26:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:07.900 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:07.900 Zero copy mechanism will not be used. 00:28:07.900 Running I/O for 2 seconds... 
00:28:09.804
00:28:09.804 Latency(us)
00:28:09.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:09.804 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:09.804 nvme0n1 : 2.00 3889.94 486.24 0.00 0.00 4108.90 919.74 10068.71
00:28:09.804 ===================================================================================================================
00:28:09.804 Total : 3889.94 486.24 0.00 0.00 4108.90 919.74 10068.71
00:28:09.804 0
00:28:10.062 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:10.062 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:10.062 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:10.062 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:10.062 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:10.062 | select(.opcode=="crc32c")
00:28:10.062 | "\(.module_name) \(.executed)"'
00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 200977
00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 200977 ']'
00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 200977
00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 200977
00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 200977'
00:28:10.322 killing process with pid 200977
00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 200977
00:28:10.322 Received shutdown signal, test time was about 2.000000 seconds
00:28:10.322
00:28:10.322 Latency(us)
00:28:10.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:10.322 ===================================================================================================================
00:28:10.322 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 200977
00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:28:10.322 20:26:35
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:10.322 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:10.582 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=201540 00:28:10.582 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 201540 /var/tmp/bperf.sock 00:28:10.582 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:10.582 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 201540 ']' 00:28:10.582 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:10.582 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:10.582 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:10.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:10.582 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:10.582 20:26:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:10.582 [2024-07-15 20:26:35.718370] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:28:10.582 [2024-07-15 20:26:35.718432] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid201540 ] 00:28:10.582 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.582 [2024-07-15 20:26:35.792306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.582 [2024-07-15 20:26:35.882701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.519 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:11.519 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:11.519 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:11.519 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:11.519 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:11.778 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:11.778 20:26:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:12.037 nvme0n1 00:28:12.037 20:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:12.037 20:26:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:12.037 Running I/O for 2 seconds... 
00:28:14.598
00:28:14.598 Latency(us)
00:28:14.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:14.599 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:14.599 nvme0n1 : 2.01 18595.41 72.64 0.00 0.00 6871.22 3500.22 16086.11
00:28:14.599 ===================================================================================================================
00:28:14.599 Total : 18595.41 72.64 0.00 0.00 6871.22 3500.22 16086.11
00:28:14.599 0
00:28:14.599 20:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:14.599 20:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:14.599 20:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:14.599 20:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:14.599 20:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:14.599 | select(.opcode=="crc32c")
00:28:14.599 | "\(.module_name) \(.executed)"'
00:28:14.599 20:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:14.599 20:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:14.599 20:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:14.599 20:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:14.599 20:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 201540
00:28:14.599 20:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 201540 ']'
00:28:14.599 20:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 201540
00:28:14.599 20:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:28:14.599 20:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:14.599 20:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 201540
00:28:14.599 20:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:14.599 20:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:14.599 20:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 201540'
00:28:14.599 killing process with pid 201540
00:28:14.599 20:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 201540
00:28:14.599 Received shutdown signal, test time was about 2.000000 seconds
00:28:14.599
00:28:14.599 Latency(us)
00:28:14.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:14.599 ===================================================================================================================
00:28:14.599 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:14.599 20:26:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 201540
00:28:14.857 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:28:14.857 20:26:40
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:14.857 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:14.857 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:14.857 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:14.857 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:14.857 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:14.857 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=202333 00:28:14.857 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 202333 /var/tmp/bperf.sock 00:28:14.857 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 202333 ']' 00:28:14.857 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:14.857 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:14.857 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:14.857 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:14.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:14.857 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:14.857 20:26:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:14.857 [2024-07-15 20:26:40.148980] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:28:14.857 [2024-07-15 20:26:40.149045] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202333 ] 00:28:14.857 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:14.857 Zero copy mechanism will not be used. 
00:28:14.857 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.116 [2024-07-15 20:26:40.221541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.116 [2024-07-15 20:26:40.305626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.684 20:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:15.684 20:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:15.684 20:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:15.684 20:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:15.684 20:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:16.254 20:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:16.254 20:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:16.512 nvme0n1 00:28:16.512 20:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:16.512 20:26:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:16.512 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:16.512 Zero copy mechanism will not be used. 00:28:16.512 Running I/O for 2 seconds... 
00:28:19.057
00:28:19.057 Latency(us)
00:28:19.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:19.057 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:19.057 nvme0n1 : 2.00 4853.79 606.72 0.00 0.00 3288.89 2576.76 9532.51
00:28:19.057 ===================================================================================================================
00:28:19.057 Total : 4853.79 606.72 0.00 0.00 3288.89 2576.76 9532.51
00:28:19.057 0
00:28:19.057 20:26:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:19.057 20:26:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:19.057 20:26:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:19.057 20:26:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:19.057 | select(.opcode=="crc32c")
00:28:19.057 | "\(.module_name) \(.executed)"'
00:28:19.057 20:26:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 202333
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 202333 ']'
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 202333
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 202333
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 202333'
00:28:19.057 killing process with pid 202333
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 202333
00:28:19.057 Received shutdown signal, test time was about 2.000000 seconds
00:28:19.057
00:28:19.057 Latency(us)
00:28:19.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:19.057 ===================================================================================================================
00:28:19.057 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 202333
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 200195
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean
-- common/autotest_common.sh@948 -- # '[' -z 200195 ']'
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 200195
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:19.057 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 200195
00:28:19.332 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:28:19.332 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:28:19.332 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 200195'
00:28:19.332 killing process with pid 200195
00:28:19.332 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 200195
00:28:19.332 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 200195
00:28:19.332
00:28:19.332 real 0m16.464s
00:28:19.332 user 0m33.229s
00:28:19.332 sys 0m4.185s
00:28:19.332 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:19.332 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:19.332 ************************************
00:28:19.332 END TEST nvmf_digest_clean
00:28:19.332 ************************************
00:28:19.332 20:26:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:28:19.332 20:26:44 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:28:19.332 20:26:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:28:19.332 20:26:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:19.332 20:26:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:28:19.622 ************************************
00:28:19.622 START TEST nvmf_digest_error
00:28:19.622 ************************************
00:28:19.622 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error
00:28:19.622 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:28:19.622 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:19.622 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:19.622 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:19.622 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=203158
00:28:19.622 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 203158
00:28:19.622 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:28:19.622 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 203158 ']'
00:28:19.622 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:19.622 20:26:44
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:19.622 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.622 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:19.622 20:26:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:19.622 [2024-07-15 20:26:44.766286] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:28:19.622 [2024-07-15 20:26:44.766343] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.622 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.622 [2024-07-15 20:26:44.853084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.622 [2024-07-15 20:26:44.938988] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.622 [2024-07-15 20:26:44.939032] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.622 [2024-07-15 20:26:44.939042] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.622 [2024-07-15 20:26:44.939050] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.622 [2024-07-15 20:26:44.939057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:19.622 [2024-07-15 20:26:44.939084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.559 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:20.559 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:20.560 [2024-07-15 20:26:45.725447] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:20.560 null0 00:28:20.560 [2024-07-15 20:26:45.820645] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.560 [2024-07-15 20:26:45.844827] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=203431 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 203431 /var/tmp/bperf.sock 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 203431 ']' 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:20.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:20.560 20:26:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:20.560 [2024-07-15 20:26:45.889079] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:28:20.560 [2024-07-15 20:26:45.889135] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid203431 ] 00:28:20.819 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.819 [2024-07-15 20:26:45.959557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.819 [2024-07-15 20:26:46.050340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.819 20:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:20.819 20:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:20.819 20:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:20.819 20:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:21.078 20:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:21.078 20:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.078 20:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.078 20:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.078 20:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.078 20:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.337 nvme0n1 00:28:21.337 20:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:21.337 20:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.337 20:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.337 20:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.337 20:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:21.337 20:26:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:21.596 Running I/O for 2 seconds... 00:28:21.596 [2024-07-15 20:26:46.732890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.596 [2024-07-15 20:26:46.732930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.596 [2024-07-15 20:26:46.732946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.596 [2024-07-15 20:26:46.749194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.596 [2024-07-15 20:26:46.749224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.596 [2024-07-15 20:26:46.749241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.596 [2024-07-15 20:26:46.763149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.596 [2024-07-15 20:26:46.763177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.596 [2024-07-15 20:26:46.763190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.596 [2024-07-15 20:26:46.775764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.596 [2024-07-15 20:26:46.775792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.596 [2024-07-15 20:26:46.775804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.596 [2024-07-15 20:26:46.790975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.596 [2024-07-15 20:26:46.791002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.596 [2024-07-15 20:26:46.791014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.596 [2024-07-15 20:26:46.806202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.596 [2024-07-15 20:26:46.806229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.596 [2024-07-15 20:26:46.806241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.596 [2024-07-15 20:26:46.819127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.596 [2024-07-15 20:26:46.819153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3035 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.596 [2024-07-15 20:26:46.819165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.596 [2024-07-15 20:26:46.835056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.596 [2024-07-15 20:26:46.835083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.596 [2024-07-15 20:26:46.835094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.596 [2024-07-15 20:26:46.850297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.596 [2024-07-15 20:26:46.850324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.596 [2024-07-15 20:26:46.850337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.596 [2024-07-15 20:26:46.862247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.596 [2024-07-15 20:26:46.862278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.596 [2024-07-15 20:26:46.862290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.596 [2024-07-15 20:26:46.878150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.596 [2024-07-15 20:26:46.878181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.596 [2024-07-15 20:26:46.878193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.596 [2024-07-15 20:26:46.892615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.597 [2024-07-15 20:26:46.892642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.597 [2024-07-15 20:26:46.892654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.597 [2024-07-15 20:26:46.907457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.597 [2024-07-15 20:26:46.907486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.597 [2024-07-15 20:26:46.907497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.597 [2024-07-15 20:26:46.920007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.597 [2024-07-15 20:26:46.920033] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.597 [2024-07-15 20:26:46.920045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.597 [2024-07-15 20:26:46.936230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.597 [2024-07-15 20:26:46.936264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.597 [2024-07-15 20:26:46.936276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.856 [2024-07-15 20:26:46.949019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.856 [2024-07-15 20:26:46.949046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.856 [2024-07-15 20:26:46.949058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.856 [2024-07-15 20:26:46.967013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.856 [2024-07-15 20:26:46.967041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.856 [2024-07-15 20:26:46.967054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.856 [2024-07-15 20:26:46.983718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.856 [2024-07-15 20:26:46.983745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.856 [2024-07-15 20:26:46.983757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.856 [2024-07-15 20:26:46.996503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.857 [2024-07-15 20:26:46.996530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.857 [2024-07-15 20:26:46.996547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.857 [2024-07-15 20:26:47.010631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.857 [2024-07-15 20:26:47.010658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.857 [2024-07-15 20:26:47.010669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.857 [2024-07-15 20:26:47.025898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.857 [2024-07-15 20:26:47.025925] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.857 [2024-07-15 20:26:47.025937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.857 [2024-07-15 20:26:47.038556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.857 [2024-07-15 20:26:47.038582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.857 [2024-07-15 20:26:47.038594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.857 [2024-07-15 20:26:47.056940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.857 [2024-07-15 20:26:47.056967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.857 [2024-07-15 20:26:47.056979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.857 [2024-07-15 20:26:47.074214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.857 [2024-07-15 20:26:47.074241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.857 [2024-07-15 20:26:47.074252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.857 [2024-07-15 20:26:47.087122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.857 [2024-07-15 20:26:47.087149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.857 [2024-07-15 20:26:47.087161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.857 [2024-07-15 20:26:47.104893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.857 [2024-07-15 20:26:47.104921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.857 [2024-07-15 20:26:47.104933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.857 [2024-07-15 20:26:47.122558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.857 [2024-07-15 20:26:47.122585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.857 [2024-07-15 20:26:47.122596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.857 [2024-07-15 20:26:47.135552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x209f5c0) 00:28:21.857 [2024-07-15 20:26:47.135583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.857 [2024-07-15 20:26:47.135595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.857 [2024-07-15 20:26:47.149128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.857 [2024-07-15 20:26:47.149154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.857 [2024-07-15 20:26:47.149166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.857 [2024-07-15 20:26:47.165063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.857 [2024-07-15 20:26:47.165089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.857 [2024-07-15 20:26:47.165100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.857 [2024-07-15 20:26:47.181165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.857 [2024-07-15 20:26:47.181192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.857 [2024-07-15 20:26:47.181204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.857 [2024-07-15 20:26:47.196539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:21.857 [2024-07-15 20:26:47.196565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.857 [2024-07-15 20:26:47.196577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.116 [2024-07-15 20:26:47.209350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.116 [2024-07-15 20:26:47.209378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.116 [2024-07-15 20:26:47.209390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.116 [2024-07-15 20:26:47.224045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.116 [2024-07-15 20:26:47.224072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.116 [2024-07-15 20:26:47.224084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.116 [2024-07-15 20:26:47.238911] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.116 [2024-07-15 20:26:47.238937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.116 [2024-07-15 20:26:47.238948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.116 [2024-07-15 20:26:47.251744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.116 [2024-07-15 20:26:47.251770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.116 [2024-07-15 20:26:47.251781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.116 [2024-07-15 20:26:47.266777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.116 [2024-07-15 20:26:47.266804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.116 [2024-07-15 20:26:47.266816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.116 [2024-07-15 20:26:47.280699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.116 [2024-07-15 20:26:47.280725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.117 [2024-07-15 20:26:47.280736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.117 [2024-07-15 20:26:47.296124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.117 [2024-07-15 20:26:47.296151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.117 [2024-07-15 20:26:47.296162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.117 [2024-07-15 20:26:47.309113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.117 [2024-07-15 20:26:47.309140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.117 [2024-07-15 20:26:47.309152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.117 [2024-07-15 20:26:47.325916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.117 [2024-07-15 20:26:47.325942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.117 [2024-07-15 20:26:47.325954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:22.117 [2024-07-15 20:26:47.342789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.117 [2024-07-15 20:26:47.342816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.117 [2024-07-15 20:26:47.342827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.117 [2024-07-15 20:26:47.359663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.117 [2024-07-15 20:26:47.359690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.117 [2024-07-15 20:26:47.359701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.117 [2024-07-15 20:26:47.371858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.117 [2024-07-15 20:26:47.371884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.117 [2024-07-15 20:26:47.371896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.117 [2024-07-15 20:26:47.388657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.117 [2024-07-15 20:26:47.388687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.117 [2024-07-15 20:26:47.388704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.117 [2024-07-15 20:26:47.406410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.117 [2024-07-15 20:26:47.406436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.117 [2024-07-15 20:26:47.406448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.117 [2024-07-15 20:26:47.419850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.117 [2024-07-15 20:26:47.419875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.117 [2024-07-15 20:26:47.419887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.117 [2024-07-15 20:26:47.434953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.117 [2024-07-15 20:26:47.434979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.117 [2024-07-15 20:26:47.434991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.117 [2024-07-15 20:26:47.452397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.117 [2024-07-15 20:26:47.452423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.117 [2024-07-15 20:26:47.452435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.117 [2024-07-15 20:26:47.465089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.117 [2024-07-15 20:26:47.465115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.117 [2024-07-15 20:26:47.465127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.377 [2024-07-15 20:26:47.482447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.377 [2024-07-15 20:26:47.482475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.377 [2024-07-15 20:26:47.482487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.377 [2024-07-15 20:26:47.496706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.377 [2024-07-15 20:26:47.496732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.377 [2024-07-15 20:26:47.496744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.377 [2024-07-15 20:26:47.510067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.377 [2024-07-15 20:26:47.510094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.377 [2024-07-15 20:26:47.510105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.377 [2024-07-15 20:26:47.527401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.377 [2024-07-15 20:26:47.527431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.377 [2024-07-15 20:26:47.527443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.377 [2024-07-15 20:26:47.543776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.377 [2024-07-15 20:26:47.543802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.377 [2024-07-15 20:26:47.543814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.377 [2024-07-15 20:26:47.556038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.377 [2024-07-15 20:26:47.556064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.377 [2024-07-15 20:26:47.556075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.377 [2024-07-15 20:26:47.573172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.377 [2024-07-15 20:26:47.573199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.377 [2024-07-15 20:26:47.573211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.377 [2024-07-15 20:26:47.587627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.377 [2024-07-15 20:26:47.587653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.377 [2024-07-15 20:26:47.587664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.377 [2024-07-15 20:26:47.600766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.377 [2024-07-15 20:26:47.600792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.377 [2024-07-15 20:26:47.600804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.377 [2024-07-15 20:26:47.616160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.377 [2024-07-15 20:26:47.616186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.377 [2024-07-15 20:26:47.616198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.377 [2024-07-15 20:26:47.628339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.377 [2024-07-15 20:26:47.628365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.377 [2024-07-15 20:26:47.628376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.377 [2024-07-15 20:26:47.646352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.377 [2024-07-15 20:26:47.646378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:22.377 [2024-07-15 20:26:47.646389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.378 [2024-07-15 20:26:47.660462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.378 [2024-07-15 20:26:47.660489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.378 [2024-07-15 20:26:47.660500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.378 [2024-07-15 20:26:47.674530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.378 [2024-07-15 20:26:47.674556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.378 [2024-07-15 20:26:47.674568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.378 [2024-07-15 20:26:47.688355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.378 [2024-07-15 20:26:47.688381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.378 [2024-07-15 20:26:47.688393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.378 [2024-07-15 20:26:47.702129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.378 [2024-07-15 20:26:47.702155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.378 [2024-07-15 20:26:47.702168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.378 [2024-07-15 20:26:47.715647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.378 [2024-07-15 20:26:47.715673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.378 [2024-07-15 20:26:47.715685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.638 [2024-07-15 20:26:47.730670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.638 [2024-07-15 20:26:47.730698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.638 [2024-07-15 20:26:47.730710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.638 [2024-07-15 20:26:47.746454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.638 [2024-07-15 20:26:47.746481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 
lba:5885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.638 [2024-07-15 20:26:47.746492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.638 [2024-07-15 20:26:47.759297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.638 [2024-07-15 20:26:47.759323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.638 [2024-07-15 20:26:47.759335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.638 [2024-07-15 20:26:47.776639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.638 [2024-07-15 20:26:47.776668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.638 [2024-07-15 20:26:47.776681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.638 [2024-07-15 20:26:47.788982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.638 [2024-07-15 20:26:47.789008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.638 [2024-07-15 20:26:47.789020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.638 [2024-07-15 20:26:47.807149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.638 [2024-07-15 20:26:47.807176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.638 [2024-07-15 20:26:47.807188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.638 [2024-07-15 20:26:47.825230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.638 [2024-07-15 20:26:47.825261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.638 [2024-07-15 20:26:47.825273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.638 [2024-07-15 20:26:47.838320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.638 [2024-07-15 20:26:47.838346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.638 [2024-07-15 20:26:47.838358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.638 [2024-07-15 20:26:47.856375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.638 [2024-07-15 20:26:47.856402] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.638 [2024-07-15 20:26:47.856414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.638 [2024-07-15 20:26:47.869141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.638 [2024-07-15 20:26:47.869168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.638 [2024-07-15 20:26:47.869179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.638 [2024-07-15 20:26:47.885840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.638 [2024-07-15 20:26:47.885865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.638 [2024-07-15 20:26:47.885877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.638 [2024-07-15 20:26:47.900236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.638 [2024-07-15 20:26:47.900269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.638 [2024-07-15 20:26:47.900281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.638 [2024-07-15 20:26:47.914278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.638 [2024-07-15 20:26:47.914304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.638 [2024-07-15 20:26:47.914316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.638 [2024-07-15 20:26:47.930398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.638 [2024-07-15 20:26:47.930424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.638 [2024-07-15 20:26:47.930436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.638 [2024-07-15 20:26:47.942797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.638 [2024-07-15 20:26:47.942823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.638 [2024-07-15 20:26:47.942834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.638 [2024-07-15 20:26:47.959871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 
00:28:22.639 [2024-07-15 20:26:47.959897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.639 [2024-07-15 20:26:47.959909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.639 [2024-07-15 20:26:47.972812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.639 [2024-07-15 20:26:47.972838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.639 [2024-07-15 20:26:47.972849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.898 [2024-07-15 20:26:47.988856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.898 [2024-07-15 20:26:47.988882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.898 [2024-07-15 20:26:47.988894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.898 [2024-07-15 20:26:48.003972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.898 [2024-07-15 20:26:48.003999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.898 [2024-07-15 20:26:48.004011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.898 [2024-07-15 20:26:48.016908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.898 [2024-07-15 20:26:48.016934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.898 [2024-07-15 20:26:48.016946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.898 [2024-07-15 20:26:48.031372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.898 [2024-07-15 20:26:48.031398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.898 [2024-07-15 20:26:48.031420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.898 [2024-07-15 20:26:48.045880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.898 [2024-07-15 20:26:48.045905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.898 [2024-07-15 20:26:48.045917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.898 [2024-07-15 20:26:48.060634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.898 [2024-07-15 20:26:48.060658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.899 [2024-07-15 20:26:48.060671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.899 [2024-07-15 20:26:48.073726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.899 [2024-07-15 20:26:48.073751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.899 [2024-07-15 20:26:48.073763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.899 [2024-07-15 20:26:48.090885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.899 [2024-07-15 20:26:48.090911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.899 [2024-07-15 20:26:48.090923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.899 [2024-07-15 20:26:48.103882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.899 [2024-07-15 20:26:48.103908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.899 [2024-07-15 20:26:48.103919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.899 [2024-07-15 20:26:48.121848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.899 [2024-07-15 20:26:48.121874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.899 [2024-07-15 20:26:48.121885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.899 [2024-07-15 20:26:48.140958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.899 [2024-07-15 20:26:48.140985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.899 [2024-07-15 20:26:48.140996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.899 [2024-07-15 20:26:48.159788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.899 [2024-07-15 20:26:48.159815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.899 [2024-07-15 20:26:48.159827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.899 [2024-07-15 20:26:48.172753] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.899 [2024-07-15 20:26:48.172784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.899 [2024-07-15 20:26:48.172795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.899 [2024-07-15 20:26:48.190563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.899 [2024-07-15 20:26:48.190590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.899 [2024-07-15 20:26:48.190601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.899 [2024-07-15 20:26:48.209968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.899 [2024-07-15 20:26:48.209995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.899 [2024-07-15 20:26:48.210007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.899 [2024-07-15 20:26:48.226895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.899 [2024-07-15 20:26:48.226920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.899 [2024-07-15 20:26:48.226932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.899 [2024-07-15 20:26:48.239717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:22.899 [2024-07-15 20:26:48.239743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.899 [2024-07-15 20:26:48.239754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.158 [2024-07-15 20:26:48.257086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.158 [2024-07-15 20:26:48.257112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.158 [2024-07-15 20:26:48.257124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.158 [2024-07-15 20:26:48.270251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.158 [2024-07-15 20:26:48.270282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.158 [2024-07-15 20:26:48.270294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:23.158 [2024-07-15 20:26:48.289244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.158 [2024-07-15 20:26:48.289279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.158 [2024-07-15 20:26:48.289291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.158 [2024-07-15 20:26:48.301827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.158 [2024-07-15 20:26:48.301853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.158 [2024-07-15 20:26:48.301864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.158 [2024-07-15 20:26:48.318701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.158 [2024-07-15 20:26:48.318726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.158 [2024-07-15 20:26:48.318737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.158 [2024-07-15 20:26:48.336707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.158 [2024-07-15 20:26:48.336733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.158 [2024-07-15 20:26:48.336746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.158 [2024-07-15 20:26:48.349404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.158 [2024-07-15 20:26:48.349429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.158 [2024-07-15 20:26:48.349440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.158 [2024-07-15 20:26:48.367985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.158 [2024-07-15 20:26:48.368012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.158 [2024-07-15 20:26:48.368023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.158 [2024-07-15 20:26:48.380285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.158 [2024-07-15 20:26:48.380311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.158 [2024-07-15 20:26:48.380323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.158 [2024-07-15 20:26:48.398457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.158 [2024-07-15 20:26:48.398483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.158 [2024-07-15 20:26:48.398495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.158 [2024-07-15 20:26:48.417745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.158 [2024-07-15 20:26:48.417772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.158 [2024-07-15 20:26:48.417784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.158 [2024-07-15 20:26:48.430325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.158 [2024-07-15 20:26:48.430352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.158 [2024-07-15 20:26:48.430365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.158 [2024-07-15 20:26:48.448147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.158 [2024-07-15 20:26:48.448175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.158 [2024-07-15 20:26:48.448192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.158 [2024-07-15 20:26:48.467386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.158 [2024-07-15 20:26:48.467413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.158 [2024-07-15 20:26:48.467424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.158 [2024-07-15 20:26:48.486166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.158 [2024-07-15 20:26:48.486193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.158 [2024-07-15 20:26:48.486205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.158 [2024-07-15 20:26:48.503801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.158 [2024-07-15 20:26:48.503827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.158 [2024-07-15 20:26:48.503839] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.418 [2024-07-15 20:26:48.517951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.418 [2024-07-15 20:26:48.517978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.418 [2024-07-15 20:26:48.517990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.418 [2024-07-15 20:26:48.535673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.418 [2024-07-15 20:26:48.535699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.418 [2024-07-15 20:26:48.535710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.418 [2024-07-15 20:26:48.548650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.418 [2024-07-15 20:26:48.548676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.418 [2024-07-15 20:26:48.548687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.418 [2024-07-15 20:26:48.564114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.418 [2024-07-15 20:26:48.564140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.418 [2024-07-15 20:26:48.564152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.418 [2024-07-15 20:26:48.578859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.418 [2024-07-15 20:26:48.578885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.418 [2024-07-15 20:26:48.578897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.418 [2024-07-15 20:26:48.591313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.418 [2024-07-15 20:26:48.591339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.418 [2024-07-15 20:26:48.591351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.418 [2024-07-15 20:26:48.607709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.418 [2024-07-15 20:26:48.607736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:23.418 [2024-07-15 20:26:48.607747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.418 [2024-07-15 20:26:48.625881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.418 [2024-07-15 20:26:48.625908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.418 [2024-07-15 20:26:48.625919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.418 [2024-07-15 20:26:48.639245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.418 [2024-07-15 20:26:48.639280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.418 [2024-07-15 20:26:48.639293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.418 [2024-07-15 20:26:48.655447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.418 [2024-07-15 20:26:48.655474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.418 [2024-07-15 20:26:48.655486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.418 [2024-07-15 20:26:48.670054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.418 [2024-07-15 20:26:48.670080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.418 [2024-07-15 20:26:48.670092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.418 [2024-07-15 20:26:48.684325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.418 [2024-07-15 20:26:48.684352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.418 [2024-07-15 20:26:48.684363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.418 [2024-07-15 20:26:48.698374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.418 [2024-07-15 20:26:48.698400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.418 [2024-07-15 20:26:48.698412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.418 [2024-07-15 20:26:48.712064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x209f5c0) 00:28:23.418 [2024-07-15 20:26:48.712091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 
lba:4667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.418 [2024-07-15 20:26:48.712107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.418 00:28:23.418 Latency(us) 00:28:23.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.418 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:23.418 nvme0n1 : 2.01 16736.30 65.38 0.00 0.00 7639.62 4170.47 25499.46 00:28:23.418 =================================================================================================================== 00:28:23.418 Total : 16736.30 65.38 0.00 0.00 7639.62 4170.47 25499.46 00:28:23.418 0 00:28:23.418 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:23.418 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:23.418 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:23.418 | .driver_specific 00:28:23.418 | .nvme_error 00:28:23.418 | .status_code 00:28:23.418 | .command_transient_transport_error' 00:28:23.418 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:23.677 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 131 > 0 )) 00:28:23.677 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 203431 00:28:23.677 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 203431 ']' 00:28:23.677 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 203431 00:28:23.677 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:23.677 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:23.677 20:26:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 203431 00:28:23.936 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:23.936 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:23.936 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 203431' 00:28:23.936 killing process with pid 203431 00:28:23.936 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 203431 00:28:23.936 Received shutdown signal, test time was about 2.000000 seconds 00:28:23.936 00:28:23.936 Latency(us) 00:28:23.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.936 =================================================================================================================== 00:28:23.936 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:23.936 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 203431 00:28:23.936 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:23.936 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:23.936 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # rw=randread 00:28:23.936 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:23.936 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:23.936 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:23.936 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=203973 00:28:23.936 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 203973 /var/tmp/bperf.sock 00:28:23.936 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 203973 ']' 00:28:23.936 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:23.936 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:23.936 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:23.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:23.936 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:23.936 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:23.936 [2024-07-15 20:26:49.276702] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:28:23.936 [2024-07-15 20:26:49.276760] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid203973 ] 00:28:23.936 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:23.936 Zero copy mechanism will not be used. 
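The trace just above shows how one test leg closes out and the next one starts: bdev_get_iostat on nvme0n1 is piped through jq to pull the command_transient_transport_error counter (131 here), the check (( 131 > 0 )) passes, the previous bdevperf (pid 203431) is killed, and run_bperf_err randread 131072 16 launches a fresh bdevperf against /var/tmp/bperf.sock. A minimal shell sketch of that sequence is below; it is not the literal digest.sh code, and $SPDK_DIR stands in for the workspace checkout path seen in the log.

# Read the per-bdev NVMe error counters accumulated because of --nvme-error-stat
errcount=$($SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# This leg passes only if at least one transient transport error was counted
(( errcount > 0 )) || exit 1
# Stop the previous bdevperf instance and start the next leg: randread, 128 KiB I/O, queue depth 16
kill "$bperfpid"
$SPDK_DIR/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!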
00:28:24.194 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.194 [2024-07-15 20:26:49.346975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.194 [2024-07-15 20:26:49.437825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.194 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:24.194 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:24.194 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:24.194 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:24.452 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:24.452 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.452 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:24.452 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.452 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:24.452 20:26:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:25.018 nvme0n1 00:28:25.018 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:25.019 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.019 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.019 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.019 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:25.019 20:26:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:25.019 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:25.019 Zero copy mechanism will not be used. 00:28:25.019 Running I/O for 2 seconds... 
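The setup for this run is all visible in the trace above: bdev_nvme_set_options enables --nvme-error-stat and sets --bdev-retry-count -1 over the bperf socket, accel_error_inject_error first disables any leftover crc32c injection, the controller is attached with --ddgst so TCP data digests are computed and checked, nvme0n1 appears, crc32c corruption is injected with -i 32, and bdevperf.py perform_tests starts the 2-second workload. A hedged sketch of the same sequence follows; $SPDK_DIR again stands in for the workspace path, and rpc_cmd is the autotest helper whose target RPC socket is not shown in this excerpt.

RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"   # talks to the bdevperf application
# Keep per-status-code NVMe error counters; -1 lets the bdev layer keep retrying I/O hit by injected errors
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Clear any crc32c error injection left over from a previous leg
rpc_cmd accel_error_inject_error -o crc32c -t disable
# Attach the target with TCP data digest enabled so received data is CRC32C-verified on the host side
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Inject corruption into crc32c results (-i 32 copied from the trace; read here as an injection interval)
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
# Run the queued bdevperf job (randread, 128 KiB, qd 16) for its 2-second window
$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests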
00:28:25.019 [2024-07-15 20:26:50.281937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.019 [2024-07-15 20:26:50.281980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.019 [2024-07-15 20:26:50.281999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.019 [2024-07-15 20:26:50.291346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.019 [2024-07-15 20:26:50.291378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.019 [2024-07-15 20:26:50.291390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.019 [2024-07-15 20:26:50.300628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.019 [2024-07-15 20:26:50.300657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.019 [2024-07-15 20:26:50.300670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.019 [2024-07-15 20:26:50.310045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.019 [2024-07-15 20:26:50.310073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.019 [2024-07-15 20:26:50.310085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.019 [2024-07-15 20:26:50.319072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.019 [2024-07-15 20:26:50.319099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.019 [2024-07-15 20:26:50.319111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.019 [2024-07-15 20:26:50.328170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.019 [2024-07-15 20:26:50.328198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.019 [2024-07-15 20:26:50.328210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.019 [2024-07-15 20:26:50.337706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.019 [2024-07-15 20:26:50.337734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.019 [2024-07-15 20:26:50.337746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.019 [2024-07-15 20:26:50.347648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.019 [2024-07-15 20:26:50.347676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.019 [2024-07-15 20:26:50.347688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.019 [2024-07-15 20:26:50.356466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.019 [2024-07-15 20:26:50.356495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.019 [2024-07-15 20:26:50.356507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.019 [2024-07-15 20:26:50.366476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.019 [2024-07-15 20:26:50.366505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.019 [2024-07-15 20:26:50.366517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.278 [2024-07-15 20:26:50.376685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.278 [2024-07-15 20:26:50.376716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.278 [2024-07-15 20:26:50.376728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.278 [2024-07-15 20:26:50.386005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.278 [2024-07-15 20:26:50.386036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.278 [2024-07-15 20:26:50.386049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.278 [2024-07-15 20:26:50.396635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.278 [2024-07-15 20:26:50.396666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.278 [2024-07-15 20:26:50.396678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.278 [2024-07-15 20:26:50.405683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.278 [2024-07-15 20:26:50.405713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.278 [2024-07-15 20:26:50.405725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.278 [2024-07-15 20:26:50.415220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.278 [2024-07-15 20:26:50.415249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.278 [2024-07-15 20:26:50.415269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.278 [2024-07-15 20:26:50.424263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.278 [2024-07-15 20:26:50.424291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.278 [2024-07-15 20:26:50.424302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.278 [2024-07-15 20:26:50.432873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.278 [2024-07-15 20:26:50.432900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.278 [2024-07-15 20:26:50.432912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.278 [2024-07-15 20:26:50.441032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.278 [2024-07-15 20:26:50.441059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.278 [2024-07-15 20:26:50.441075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.278 [2024-07-15 20:26:50.449052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.278 [2024-07-15 20:26:50.449079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.278 [2024-07-15 20:26:50.449091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.278 [2024-07-15 20:26:50.456483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.278 [2024-07-15 20:26:50.456520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.278 [2024-07-15 20:26:50.456532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.278 [2024-07-15 20:26:50.463923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.278 [2024-07-15 20:26:50.463951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:25.278 [2024-07-15 20:26:50.463962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.278 [2024-07-15 20:26:50.471422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.278 [2024-07-15 20:26:50.471449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.278 [2024-07-15 20:26:50.471460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.278 [2024-07-15 20:26:50.479170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.278 [2024-07-15 20:26:50.479197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.278 [2024-07-15 20:26:50.479208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.278 [2024-07-15 20:26:50.486623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.278 [2024-07-15 20:26:50.486650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.278 [2024-07-15 20:26:50.486660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.278 [2024-07-15 20:26:50.494009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.278 [2024-07-15 20:26:50.494036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.278 [2024-07-15 20:26:50.494047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.278 [2024-07-15 20:26:50.501478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.278 [2024-07-15 20:26:50.501505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.278 [2024-07-15 20:26:50.501515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.279 [2024-07-15 20:26:50.509223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.279 [2024-07-15 20:26:50.509260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.279 [2024-07-15 20:26:50.509272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.279 [2024-07-15 20:26:50.516657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.279 [2024-07-15 20:26:50.516684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10528 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.279 [2024-07-15 20:26:50.516695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.279 [2024-07-15 20:26:50.524041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.279 [2024-07-15 20:26:50.524067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.279 [2024-07-15 20:26:50.524078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.279 [2024-07-15 20:26:50.531511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.279 [2024-07-15 20:26:50.531538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.279 [2024-07-15 20:26:50.531550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.279 [2024-07-15 20:26:50.539215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.279 [2024-07-15 20:26:50.539243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.279 [2024-07-15 20:26:50.539261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.279 [2024-07-15 20:26:50.547874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.279 [2024-07-15 20:26:50.547900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.279 [2024-07-15 20:26:50.547911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.279 [2024-07-15 20:26:50.555418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.279 [2024-07-15 20:26:50.555444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.279 [2024-07-15 20:26:50.555455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.279 [2024-07-15 20:26:50.562801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.279 [2024-07-15 20:26:50.562828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.279 [2024-07-15 20:26:50.562839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.279 [2024-07-15 20:26:50.570071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.279 [2024-07-15 20:26:50.570097] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.279 [2024-07-15 20:26:50.570108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.279 [2024-07-15 20:26:50.577192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.279 [2024-07-15 20:26:50.577219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.279 [2024-07-15 20:26:50.577230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.279 [2024-07-15 20:26:50.584350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.279 [2024-07-15 20:26:50.584377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.279 [2024-07-15 20:26:50.584388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.279 [2024-07-15 20:26:50.591548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.279 [2024-07-15 20:26:50.591574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.279 [2024-07-15 20:26:50.591586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.279 [2024-07-15 20:26:50.598792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.279 [2024-07-15 20:26:50.598818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.279 [2024-07-15 20:26:50.598829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.279 [2024-07-15 20:26:50.606032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.279 [2024-07-15 20:26:50.606059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.279 [2024-07-15 20:26:50.606070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.279 [2024-07-15 20:26:50.613515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.279 [2024-07-15 20:26:50.613542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.279 [2024-07-15 20:26:50.613553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.279 [2024-07-15 20:26:50.621154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.279 [2024-07-15 20:26:50.621181] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.279 [2024-07-15 20:26:50.621192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.538 [2024-07-15 20:26:50.628707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.538 [2024-07-15 20:26:50.628734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.538 [2024-07-15 20:26:50.628745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.538 [2024-07-15 20:26:50.636319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.538 [2024-07-15 20:26:50.636344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.538 [2024-07-15 20:26:50.636360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.538 [2024-07-15 20:26:50.643742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.538 [2024-07-15 20:26:50.643768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.538 [2024-07-15 20:26:50.643779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.538 [2024-07-15 20:26:50.651417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.538 [2024-07-15 20:26:50.651443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.538 [2024-07-15 20:26:50.651455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.538 [2024-07-15 20:26:50.659102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.538 [2024-07-15 20:26:50.659129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.538 [2024-07-15 20:26:50.659140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.538 [2024-07-15 20:26:50.666714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.538 [2024-07-15 20:26:50.666741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.538 [2024-07-15 20:26:50.666752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.538 [2024-07-15 20:26:50.674088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 
00:28:25.539 [2024-07-15 20:26:50.674115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.674126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.681518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.681545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.681555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.689224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.689251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.689269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.696788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.696814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.696826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.704119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.704149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.704161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.711499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.711526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.711537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.719127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.719153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.719163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.726742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.726769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.726780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.734091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.734118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.734129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.741489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.741515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.741526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.749146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.749171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.749182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.756714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.756741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.756752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.764433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.764460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.764471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.771610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.771637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.771648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.778879] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.778905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.778916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.786421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.786447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.786459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.794110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.794137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.794148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.802752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.802779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.802790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.810132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.810159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.810170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.817541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.817566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.817577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.824909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.824935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.824946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:25.539 [2024-07-15 20:26:50.832037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.832064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.832079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.839338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.839364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.839375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.846825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.846851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.846863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.854540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.854567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.854578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.862260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.862287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.862298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.869774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.869800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.869811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.877164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.877191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.877202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.539 [2024-07-15 20:26:50.884590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.539 [2024-07-15 20:26:50.884617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.539 [2024-07-15 20:26:50.884629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.799 [2024-07-15 20:26:50.892009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.799 [2024-07-15 20:26:50.892037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.799 [2024-07-15 20:26:50.892049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.799 [2024-07-15 20:26:50.899282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.799 [2024-07-15 20:26:50.899308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.799 [2024-07-15 20:26:50.899319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.799 [2024-07-15 20:26:50.906390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.799 [2024-07-15 20:26:50.906416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.799 [2024-07-15 20:26:50.906427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.799 [2024-07-15 20:26:50.913536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.799 [2024-07-15 20:26:50.913563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.799 [2024-07-15 20:26:50.913574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.799 [2024-07-15 20:26:50.920935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.799 [2024-07-15 20:26:50.920962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.799 [2024-07-15 20:26:50.920973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.799 [2024-07-15 20:26:50.928485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.799 [2024-07-15 20:26:50.928513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.799 [2024-07-15 20:26:50.928524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.799 [2024-07-15 20:26:50.936204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:50.936232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:50.936242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:50.943588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:50.943614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:50.943625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:50.950944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:50.950970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:50.950981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:50.958451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:50.958478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:50.958493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:50.966295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:50.966322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:50.966333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:50.973996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:50.974023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:50.974034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:50.981767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:50.981794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:50.981805] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:50.989590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:50.989617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:50.989628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:50.997528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:50.997555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:50.997566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:51.004925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:51.004952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:51.004963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:51.012580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:51.012607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:51.012618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:51.020340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:51.020366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:51.020377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:51.028010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:51.028040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:51.028052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:51.035321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:51.035348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:25.800 [2024-07-15 20:26:51.035359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:51.042696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:51.042723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:51.042734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:51.050278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:51.050303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:51.050315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:51.059000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:51.059027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:51.059039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:51.066194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:51.066221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:51.066232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:51.073499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:51.073526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:51.073538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:51.080899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:51.080926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:51.080938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:51.088265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:51.088291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9664 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:51.088303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:51.095526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:51.095553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:51.095565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:51.102724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:51.102751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:51.102763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:51.110164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:51.110191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:51.110202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:51.117837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:51.117863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:51.117875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:51.125532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:51.125559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:51.125570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:51.133235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:51.133267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:51.133279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.800 [2024-07-15 20:26:51.140988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:25.800 [2024-07-15 20:26:51.141014] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.800 [2024-07-15 20:26:51.141025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.061 [2024-07-15 20:26:51.148890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.148918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.148930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.156320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.156346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.156361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.163746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.163773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.163784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.171246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.171278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.171289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.179077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.179104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.179116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.187379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.187408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.187419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.195610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.195639] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.195651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.204433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.204462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.204474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.212669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.212698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.212710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.220822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.220850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.220862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.228521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.228552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.228564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.236469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.236496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.236507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.244145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.244171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.244182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.252229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 
00:28:26.062 [2024-07-15 20:26:51.252262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.252274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.260569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.260598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.260609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.268990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.269018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.269029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.277556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.277584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.277595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.285892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.285920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.285932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.294209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.294237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.294249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.302602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.302630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.302642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.312333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.312360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.312372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.320106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.320133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.320144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.327765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.327792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.327803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.335337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.062 [2024-07-15 20:26:51.335364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.062 [2024-07-15 20:26:51.335375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.062 [2024-07-15 20:26:51.342732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.063 [2024-07-15 20:26:51.342759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.063 [2024-07-15 20:26:51.342770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.063 [2024-07-15 20:26:51.350182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.063 [2024-07-15 20:26:51.350208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.063 [2024-07-15 20:26:51.350219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.063 [2024-07-15 20:26:51.357923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.063 [2024-07-15 20:26:51.357951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.063 [2024-07-15 20:26:51.357962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.063 [2024-07-15 20:26:51.365446] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.063 [2024-07-15 20:26:51.365472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.063 [2024-07-15 20:26:51.365488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.063 [2024-07-15 20:26:51.372717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.063 [2024-07-15 20:26:51.372745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.063 [2024-07-15 20:26:51.372756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.063 [2024-07-15 20:26:51.380113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.063 [2024-07-15 20:26:51.380140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.063 [2024-07-15 20:26:51.380152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.063 [2024-07-15 20:26:51.387757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.063 [2024-07-15 20:26:51.387784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.063 [2024-07-15 20:26:51.387795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.063 [2024-07-15 20:26:51.395473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.063 [2024-07-15 20:26:51.395500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.063 [2024-07-15 20:26:51.395511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.063 [2024-07-15 20:26:51.402915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.063 [2024-07-15 20:26:51.402941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.063 [2024-07-15 20:26:51.402952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.323 [2024-07-15 20:26:51.410356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.323 [2024-07-15 20:26:51.410384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.323 [2024-07-15 20:26:51.410396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:26.323 [2024-07-15 20:26:51.418045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.323 [2024-07-15 20:26:51.418072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.323 [2024-07-15 20:26:51.418083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.323 [2024-07-15 20:26:51.425648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.323 [2024-07-15 20:26:51.425675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.323 [2024-07-15 20:26:51.425687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.323 [2024-07-15 20:26:51.432973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.323 [2024-07-15 20:26:51.433001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.323 [2024-07-15 20:26:51.433012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.323 [2024-07-15 20:26:51.440324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.323 [2024-07-15 20:26:51.440350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.323 [2024-07-15 20:26:51.440362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.323 [2024-07-15 20:26:51.447816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.323 [2024-07-15 20:26:51.447844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.323 [2024-07-15 20:26:51.447855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.323 [2024-07-15 20:26:51.455528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.323 [2024-07-15 20:26:51.455556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.323 [2024-07-15 20:26:51.455570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.323 [2024-07-15 20:26:51.463225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.323 [2024-07-15 20:26:51.463261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.323 [2024-07-15 20:26:51.463273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.323 [2024-07-15 20:26:51.470418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.323 [2024-07-15 20:26:51.470446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.323 [2024-07-15 20:26:51.470458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.323 [2024-07-15 20:26:51.477696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.323 [2024-07-15 20:26:51.477723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.323 [2024-07-15 20:26:51.477735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.323 [2024-07-15 20:26:51.485164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.323 [2024-07-15 20:26:51.485192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.323 [2024-07-15 20:26:51.485204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.324 [2024-07-15 20:26:51.492963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.324 [2024-07-15 20:26:51.492990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.324 [2024-07-15 20:26:51.493006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.324 [2024-07-15 20:26:51.500787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.324 [2024-07-15 20:26:51.500815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.324 [2024-07-15 20:26:51.500826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.324 [2024-07-15 20:26:51.508420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.324 [2024-07-15 20:26:51.508447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.324 [2024-07-15 20:26:51.508458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.324 [2024-07-15 20:26:51.515928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.324 [2024-07-15 20:26:51.515956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.324 [2024-07-15 20:26:51.515967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.324 [2024-07-15 20:26:51.523642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.324 [2024-07-15 20:26:51.523668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.324 [2024-07-15 20:26:51.523680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.324 [2024-07-15 20:26:51.531073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.324 [2024-07-15 20:26:51.531100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.324 [2024-07-15 20:26:51.531112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.324 [2024-07-15 20:26:51.538415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.324 [2024-07-15 20:26:51.538442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.324 [2024-07-15 20:26:51.538452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.324 [2024-07-15 20:26:51.545939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.324 [2024-07-15 20:26:51.545966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.324 [2024-07-15 20:26:51.545979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.324 [2024-07-15 20:26:51.553630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.324 [2024-07-15 20:26:51.553660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.324 [2024-07-15 20:26:51.553672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.324 [2024-07-15 20:26:51.561956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.324 [2024-07-15 20:26:51.561988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.324 [2024-07-15 20:26:51.561999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.324 [2024-07-15 20:26:51.569759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.324 [2024-07-15 20:26:51.569786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.324 [2024-07-15 20:26:51.569798] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.324 [2024-07-15 20:26:51.578682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.324 [2024-07-15 20:26:51.578711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.324 [2024-07-15 20:26:51.578722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.324 [2024-07-15 20:26:51.583666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.324 [2024-07-15 20:26:51.583692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.324 [2024-07-15 20:26:51.583704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.324 [2024-07-15 20:26:51.590735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.324 [2024-07-15 20:26:51.590764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.324 [2024-07-15 20:26:51.590776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.324 [2024-07-15 20:26:51.598445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.324 [2024-07-15 20:26:51.598472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.324 [2024-07-15 20:26:51.598485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.324 [2024-07-15 20:26:51.606653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.324 [2024-07-15 20:26:51.606681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.324 [2024-07-15 20:26:51.606693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.324 [2024-07-15 20:26:51.615139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.324 [2024-07-15 20:26:51.615167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.324 [2024-07-15 20:26:51.615179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.324 [2024-07-15 20:26:51.623088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.324 [2024-07-15 20:26:51.623117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:26.324 [2024-07-15 20:26:51.623129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.324 [2024-07-15 20:26:51.631286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.324 [2024-07-15 20:26:51.631315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.324 [2024-07-15 20:26:51.631327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.324 [2024-07-15 20:26:51.639349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.324 [2024-07-15 20:26:51.639377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.324 [2024-07-15 20:26:51.639388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.325 [2024-07-15 20:26:51.647483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.325 [2024-07-15 20:26:51.647512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.325 [2024-07-15 20:26:51.647524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.325 [2024-07-15 20:26:51.656836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.325 [2024-07-15 20:26:51.656864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.325 [2024-07-15 20:26:51.656876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.325 [2024-07-15 20:26:51.666325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.325 [2024-07-15 20:26:51.666354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.325 [2024-07-15 20:26:51.666365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.584 [2024-07-15 20:26:51.675566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.584 [2024-07-15 20:26:51.675595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.584 [2024-07-15 20:26:51.675607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.584 [2024-07-15 20:26:51.684796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.584 [2024-07-15 20:26:51.684825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2080 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.684836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.694413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.694442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.694455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.702882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.702911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.702927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.712813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.712844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.712856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.723021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.723050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.723062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.731881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.731911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.731923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.740689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.740719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.740732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.751352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.751381] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.751393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.759731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.759761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.759773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.768634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.768662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.768673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.777130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.777158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.777169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.785646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.785677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.785688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.794048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.794075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.794086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.802293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.802320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.802332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.806732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.806759] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.806770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.815291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.815318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.815330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.823239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.823273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.823285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.831413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.831439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.831451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.839394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.839422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.839433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.847249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.847283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.847295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.855293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.855321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.855333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.863341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 
00:28:26.585 [2024-07-15 20:26:51.863368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.863380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.872007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.872035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.872048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.879837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.879865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.879876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.887484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.887512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.887524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.895164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.895192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.895203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.902729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.902757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.902768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.910456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.910485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.910496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.919225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.585 [2024-07-15 20:26:51.919251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.585 [2024-07-15 20:26:51.919274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.585 [2024-07-15 20:26:51.928101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.586 [2024-07-15 20:26:51.928129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.586 [2024-07-15 20:26:51.928141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.845 [2024-07-15 20:26:51.936175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.845 [2024-07-15 20:26:51.936203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.845 [2024-07-15 20:26:51.936215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.845 [2024-07-15 20:26:51.944369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.845 [2024-07-15 20:26:51.944395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.845 [2024-07-15 20:26:51.944406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.845 [2024-07-15 20:26:51.952863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.845 [2024-07-15 20:26:51.952890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.845 [2024-07-15 20:26:51.952901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.845 [2024-07-15 20:26:51.960852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:51.960878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:51.960889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:51.969133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:51.969159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:51.969171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:51.976956] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:51.976983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:51.976993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:51.984822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:51.984849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:51.984860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:51.992745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:51.992776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:51.992787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:52.000398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:52.000426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:52.000437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:52.008251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:52.008286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:52.008298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:52.016153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:52.016182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:52.016193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:52.023990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:52.024017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:52.024028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:26.846 [2024-07-15 20:26:52.031668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:52.031695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:52.031706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:52.039695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:52.039721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:52.039732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:52.047474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:52.047502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:52.047513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:52.054874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:52.054901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:52.054916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:52.062502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:52.062528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:52.062539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:52.070337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:52.070364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:52.070375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:52.078350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:52.078377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:52.078388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:52.087036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:52.087064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:52.087075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:52.096088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:52.096115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:52.096126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:52.104656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:52.104683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:52.104694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:52.112837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:52.112865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:52.112876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:52.121994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:52.122021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:52.122031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:52.130817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:52.130853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:52.130864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.846 [2024-07-15 20:26:52.139965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.846 [2024-07-15 20:26:52.139993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.846 [2024-07-15 20:26:52.140005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.847 [2024-07-15 20:26:52.148449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.847 [2024-07-15 20:26:52.148475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-07-15 20:26:52.148486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.847 [2024-07-15 20:26:52.158021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.847 [2024-07-15 20:26:52.158049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-07-15 20:26:52.158061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.847 [2024-07-15 20:26:52.166931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.847 [2024-07-15 20:26:52.166958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-07-15 20:26:52.166969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.847 [2024-07-15 20:26:52.175597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.847 [2024-07-15 20:26:52.175632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-07-15 20:26:52.175644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.847 [2024-07-15 20:26:52.184772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.847 [2024-07-15 20:26:52.184800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-07-15 20:26:52.184812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.847 [2024-07-15 20:26:52.193283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:26.847 [2024-07-15 20:26:52.193312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.847 [2024-07-15 20:26:52.193324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.107 [2024-07-15 20:26:52.203550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:27.107 [2024-07-15 20:26:52.203580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.107 [2024-07-15 20:26:52.203592] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.107 [2024-07-15 20:26:52.212731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:27.107 [2024-07-15 20:26:52.212759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.107 [2024-07-15 20:26:52.212770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.107 [2024-07-15 20:26:52.221812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:27.107 [2024-07-15 20:26:52.221839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.107 [2024-07-15 20:26:52.221851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.107 [2024-07-15 20:26:52.230872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:27.107 [2024-07-15 20:26:52.230900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.107 [2024-07-15 20:26:52.230911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.107 [2024-07-15 20:26:52.239331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:27.107 [2024-07-15 20:26:52.239357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.108 [2024-07-15 20:26:52.239369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.108 [2024-07-15 20:26:52.248182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:27.108 [2024-07-15 20:26:52.248210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.108 [2024-07-15 20:26:52.248222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.108 [2024-07-15 20:26:52.256903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:27.108 [2024-07-15 20:26:52.256931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.108 [2024-07-15 20:26:52.256942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.108 [2024-07-15 20:26:52.265111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490) 00:28:27.108 [2024-07-15 20:26:52.265138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.108 
[2024-07-15 20:26:52.265149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:27.108 [2024-07-15 20:26:52.273672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fb7490)
00:28:27.108 [2024-07-15 20:26:52.273701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.108 [2024-07-15 20:26:52.273713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:27.108
00:28:27.108 Latency(us)
00:28:27.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:27.108 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:27.108 nvme0n1 : 2.00 3877.33 484.67 0.00 0.00 4121.92 1042.62 10545.34
00:28:27.108 ===================================================================================================================
00:28:27.108 Total : 3877.33 484.67 0.00 0.00 4121.92 1042.62 10545.34
00:28:27.108 0
00:28:27.108 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:27.108 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:27.108 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:27.108 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:27.108 | .driver_specific
00:28:27.108 | .nvme_error
00:28:27.108 | .status_code
00:28:27.108 | .command_transient_transport_error'
00:28:27.367 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 250 > 0 ))
00:28:27.367 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 203973
00:28:27.367 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 203973 ']'
00:28:27.367 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 203973
00:28:27.367 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:28:27.367 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:27.367 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 203973
00:28:27.367 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:27.367 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:27.367 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 203973'
00:28:27.367 killing process with pid 203973
00:28:27.367 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 203973
00:28:27.367 Received shutdown signal, test time was about 2.000000 seconds
00:28:27.367
00:28:27.367 Latency(us)
00:28:27.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:27.367 ===================================================================================================================
00:28:27.367 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
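
The (( 250 > 0 )) comparison above is the pass criterion for this run: get_transient_errcount reads the transient-transport-error counter out of bdevperf's iostat JSON, and the test only requires it to be non-zero. A minimal sketch of what the traced helper appears to do, reconstructed from the host/digest.sh trace lines above (the function in the SPDK tree may differ in detail):

    get_transient_errcount() {
        # ask the bdevperf instance for per-bdev I/O statistics over its RPC socket,
        # then pull out the NVMe "command transient transport error" counter that the
        # injected crc32c corruption is expected to bump
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

Here it evaluated to 250, so the randread error pass counts as good and the bperf process (pid 203973) is torn down.
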
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:27.367 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 203973 00:28:27.626 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:27.626 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:27.626 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:27.626 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:27.626 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:27.626 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=204516 00:28:27.626 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 204516 /var/tmp/bperf.sock 00:28:27.626 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:27.626 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 204516 ']' 00:28:27.626 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:27.626 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:27.626 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:27.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:27.626 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:27.626 20:26:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:27.626 [2024-07-15 20:26:52.855969] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:28:27.626 [2024-07-15 20:26:52.856030] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid204516 ] 00:28:27.626 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.626 [2024-07-15 20:26:52.928310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.884 [2024-07-15 20:26:53.009197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.884 20:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:27.884 20:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:27.884 20:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:27.884 20:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:28.142 20:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:28.142 20:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.142 20:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:28.142 20:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.142 20:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:28.142 20:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:28.709 nvme0n1 00:28:28.709 20:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:28.709 20:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.709 20:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:28.709 20:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.709 20:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:28.709 20:26:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:28.709 Running I/O for 2 seconds... 
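For readability, the RPC calls interleaved with the bdevperf output above amount to the following setup/verification sequence. This is a consolidated sketch reconstructed only from commands visible in this trace (the bdevperf flags, the /var/tmp/bperf.sock socket, and the 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode1 target all appear in the log); the one assumption is that the crc32c error injection, issued through rpc_cmd without an -s flag, reaches the nvmf target application's default RPC socket rather than the bperf socket.

    ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock

    # Start bdevperf as an RPC server; -z makes it wait for perform_tests.
    "$ROOT"/build/examples/bdevperf -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

    # Keep per-command NVMe error statistics and retry transport errors indefinitely.
    "$ROOT"/scripts/rpc.py -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the TCP controller with data digest enabled (--ddgst).
    "$ROOT"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 256th crc32c calculation so computed data digests are wrong
    # (assumed to target the nvmf application's default RPC socket, as rpc_cmd does above).
    "$ROOT"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

    # Run the 2-second workload, then read back the transient transport error count.
    "$ROOT"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
    "$ROOT"/scripts/rpc.py -s "$SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The host/digest.sh@71 check seen earlier in this trace then asserts that the extracted count is greater than zero, i.e. that the injected digest corruption really produced COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions like the ones that follow.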
00:28:28.709 [2024-07-15 20:26:53.915374] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190de038 00:28:28.709 [2024-07-15 20:26:53.916964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.709 [2024-07-15 20:26:53.917001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.709 [2024-07-15 20:26:53.929931] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190fe2e8 00:28:28.709 [2024-07-15 20:26:53.931704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.709 [2024-07-15 20:26:53.931733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:28.709 [2024-07-15 20:26:53.941092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190de8a8 00:28:28.709 [2024-07-15 20:26:53.942093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.709 [2024-07-15 20:26:53.942117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:28.709 [2024-07-15 20:26:53.956038] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190eaef0 00:28:28.709 [2024-07-15 20:26:53.957029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.709 [2024-07-15 20:26:53.957054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:28.709 [2024-07-15 20:26:53.971641] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190fef90 00:28:28.709 [2024-07-15 20:26:53.973365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.709 [2024-07-15 20:26:53.973390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:28.709 [2024-07-15 20:26:53.982795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190df988 00:28:28.709 [2024-07-15 20:26:53.983746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.709 [2024-07-15 20:26:53.983770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:28.709 [2024-07-15 20:26:53.997651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ebfd0 00:28:28.709 [2024-07-15 20:26:53.998589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.709 [2024-07-15 20:26:53.998613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004b p:0 
m:0 dnr:0 00:28:28.709 [2024-07-15 20:26:54.012946] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f9b30 00:28:28.709 [2024-07-15 20:26:54.014617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.709 [2024-07-15 20:26:54.014641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:28.709 [2024-07-15 20:26:54.024057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e0a68 00:28:28.709 [2024-07-15 20:26:54.024960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.709 [2024-07-15 20:26:54.024984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:28.709 [2024-07-15 20:26:54.038889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ed0b0 00:28:28.709 [2024-07-15 20:26:54.039785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.709 [2024-07-15 20:26:54.039809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:28.709 [2024-07-15 20:26:54.054221] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f8a50 00:28:28.709 [2024-07-15 20:26:54.055848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.709 [2024-07-15 20:26:54.055876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.968 [2024-07-15 20:26:54.065308] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e1b48 00:28:28.968 [2024-07-15 20:26:54.066167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.968 [2024-07-15 20:26:54.066191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:28.968 [2024-07-15 20:26:54.080143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ee190 00:28:28.968 [2024-07-15 20:26:54.080990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.968 [2024-07-15 20:26:54.081015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:28.968 [2024-07-15 20:26:54.095497] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190eea00 00:28:28.968 [2024-07-15 20:26:54.097071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.968 [2024-07-15 20:26:54.097096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:28:28.968 [2024-07-15 20:26:54.110331] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f7970 00:28:28.968 [2024-07-15 20:26:54.111893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.968 [2024-07-15 20:26:54.111918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.968 [2024-07-15 20:26:54.125618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ee190 00:28:28.968 [2024-07-15 20:26:54.127907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.968 [2024-07-15 20:26:54.127931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:28.968 [2024-07-15 20:26:54.136720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f6890 00:28:28.968 [2024-07-15 20:26:54.138243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.968 [2024-07-15 20:26:54.138270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:28.968 [2024-07-15 20:26:54.151531] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e12d8 00:28:28.968 [2024-07-15 20:26:54.153044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.968 [2024-07-15 20:26:54.153067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:28.968 [2024-07-15 20:26:54.166850] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ed0b0 00:28:28.968 [2024-07-15 20:26:54.169089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.968 [2024-07-15 20:26:54.169112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:28.968 [2024-07-15 20:26:54.177923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f57b0 00:28:28.968 [2024-07-15 20:26:54.179412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.968 [2024-07-15 20:26:54.179436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:28.968 [2024-07-15 20:26:54.192744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e01f8 00:28:28.968 [2024-07-15 20:26:54.194210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.968 [2024-07-15 20:26:54.194234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:28.968 [2024-07-15 20:26:54.208083] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ebfd0 00:28:28.968 [2024-07-15 20:26:54.210277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.968 [2024-07-15 20:26:54.210301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:28.968 [2024-07-15 20:26:54.219123] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f46d0 00:28:28.968 [2024-07-15 20:26:54.220554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.968 [2024-07-15 20:26:54.220577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:28.968 [2024-07-15 20:26:54.233932] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190df118 00:28:28.969 [2024-07-15 20:26:54.235357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.969 [2024-07-15 20:26:54.235380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:28.969 [2024-07-15 20:26:54.249272] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190eaef0 00:28:28.969 [2024-07-15 20:26:54.251413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.969 [2024-07-15 20:26:54.251437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:28.969 [2024-07-15 20:26:54.260292] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f35f0 00:28:28.969 [2024-07-15 20:26:54.261669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.969 [2024-07-15 20:26:54.261692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:28.969 [2024-07-15 20:26:54.275101] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190de038 00:28:28.969 [2024-07-15 20:26:54.276485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.969 [2024-07-15 20:26:54.276509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:28.969 [2024-07-15 20:26:54.290469] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e9e10 00:28:28.969 [2024-07-15 20:26:54.292569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.969 [2024-07-15 20:26:54.292592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:28.969 [2024-07-15 20:26:54.301495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e3060 00:28:28.969 [2024-07-15 20:26:54.302829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:28.969 [2024-07-15 20:26:54.302852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:28.969 [2024-07-15 20:26:54.316337] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ef6a8 00:28:29.227 [2024-07-15 20:26:54.317664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.227 [2024-07-15 20:26:54.317689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:29.228 [2024-07-15 20:26:54.331649] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e8d30 00:28:29.228 [2024-07-15 20:26:54.333704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-07-15 20:26:54.333727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:29.228 [2024-07-15 20:26:54.342723] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e4140 00:28:29.228 [2024-07-15 20:26:54.344011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-07-15 20:26:54.344034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:29.228 [2024-07-15 20:26:54.357592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f0788 00:28:29.228 [2024-07-15 20:26:54.358869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-07-15 20:26:54.358893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:29.228 [2024-07-15 20:26:54.372889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e7c50 00:28:29.228 [2024-07-15 20:26:54.374902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-07-15 20:26:54.374925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:29.228 [2024-07-15 20:26:54.383936] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e5220 00:28:29.228 [2024-07-15 20:26:54.385175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-07-15 20:26:54.385198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.228 [2024-07-15 20:26:54.398811] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f1868 00:28:29.228 [2024-07-15 20:26:54.400039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-07-15 20:26:54.400062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:29.228 [2024-07-15 20:26:54.414103] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e6b70 00:28:29.228 [2024-07-15 20:26:54.416060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-07-15 20:26:54.416087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:29.228 [2024-07-15 20:26:54.425154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e6300 00:28:29.228 [2024-07-15 20:26:54.426355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-07-15 20:26:54.426378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.228 [2024-07-15 20:26:54.440053] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f2948 00:28:29.228 [2024-07-15 20:26:54.441242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-07-15 20:26:54.441270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:29.228 [2024-07-15 20:26:54.455315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e5a90 00:28:29.228 [2024-07-15 20:26:54.457226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-07-15 20:26:54.457250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:29.228 [2024-07-15 20:26:54.466403] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e73e0 00:28:29.228 [2024-07-15 20:26:54.467552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-07-15 20:26:54.467576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:29.228 [2024-07-15 20:26:54.481271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190fb8b8 00:28:29.228 [2024-07-15 20:26:54.482412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-07-15 20:26:54.482435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:29.228 [2024-07-15 20:26:54.494804] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e49b0 00:28:29.228 [2024-07-15 20:26:54.495940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-07-15 20:26:54.495964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:29.228 [2024-07-15 20:26:54.508819] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f0788 00:28:29.228 [2024-07-15 20:26:54.509935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-07-15 20:26:54.509959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:29.228 [2024-07-15 20:26:54.522421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f0788 00:28:29.228 [2024-07-15 20:26:54.523540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-07-15 20:26:54.523565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:29.228 [2024-07-15 20:26:54.537945] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e84c0 00:28:29.228 [2024-07-15 20:26:54.539797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-07-15 20:26:54.539824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:29.228 [2024-07-15 20:26:54.549991] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190fc998 00:28:29.228 [2024-07-15 20:26:54.551082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-07-15 20:26:54.551106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:29.228 [2024-07-15 20:26:54.563739] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e38d0 00:28:29.228 [2024-07-15 20:26:54.564832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.228 [2024-07-15 20:26:54.564856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:29.487 [2024-07-15 20:26:54.577733] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ef6a8 00:28:29.487 [2024-07-15 20:26:54.578812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.487 [2024-07-15 20:26:54.578836] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:29.487 [2024-07-15 20:26:54.590451] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e95a0 00:28:29.487 [2024-07-15 20:26:54.591510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.487 [2024-07-15 20:26:54.591533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:29.487 [2024-07-15 20:26:54.606885] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e95a0 00:28:29.487 [2024-07-15 20:26:54.608687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.487 [2024-07-15 20:26:54.608712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:29.487 [2024-07-15 20:26:54.618941] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190fda78 00:28:29.488 [2024-07-15 20:26:54.619984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.488 [2024-07-15 20:26:54.620009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:29.488 [2024-07-15 20:26:54.632693] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f2d80 00:28:29.488 [2024-07-15 20:26:54.633729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.488 [2024-07-15 20:26:54.633753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:29.488 [2024-07-15 20:26:54.646636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190de038 00:28:29.488 [2024-07-15 20:26:54.647655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.488 [2024-07-15 20:26:54.647679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:29.488 [2024-07-15 20:26:54.659350] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ea680 00:28:29.488 [2024-07-15 20:26:54.660367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.488 [2024-07-15 20:26:54.660391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:29.488 [2024-07-15 20:26:54.675791] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ea680 00:28:29.488 [2024-07-15 20:26:54.677546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.488 [2024-07-15 
20:26:54.677570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:29.488 [2024-07-15 20:26:54.687781] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190feb58 00:28:29.488 [2024-07-15 20:26:54.688783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.488 [2024-07-15 20:26:54.688807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:29.488 [2024-07-15 20:26:54.701572] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f3e60 00:28:29.488 [2024-07-15 20:26:54.702563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.488 [2024-07-15 20:26:54.702586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:29.488 [2024-07-15 20:26:54.715516] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190df118 00:28:29.488 [2024-07-15 20:26:54.716492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.488 [2024-07-15 20:26:54.716516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:29.488 [2024-07-15 20:26:54.728282] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190eb760 00:28:29.488 [2024-07-15 20:26:54.729241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.488 [2024-07-15 20:26:54.729274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:29.488 [2024-07-15 20:26:54.744779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190eb760 00:28:29.488 [2024-07-15 20:26:54.746486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.488 [2024-07-15 20:26:54.746511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:29.488 [2024-07-15 20:26:54.756749] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190fa3a0 00:28:29.488 [2024-07-15 20:26:54.757704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.488 [2024-07-15 20:26:54.757728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:29.488 [2024-07-15 20:26:54.769667] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f4f40 00:28:29.488 [2024-07-15 20:26:54.770607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:29.488 [2024-07-15 20:26:54.770632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:29.488 [2024-07-15 20:26:54.786240] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f4f40 00:28:29.488 [2024-07-15 20:26:54.787922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.488 [2024-07-15 20:26:54.787946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:29.488 [2024-07-15 20:26:54.798208] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e01f8 00:28:29.488 [2024-07-15 20:26:54.799144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.488 [2024-07-15 20:26:54.799167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:29.488 [2024-07-15 20:26:54.811160] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ec840 00:28:29.488 [2024-07-15 20:26:54.812078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.488 [2024-07-15 20:26:54.812101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:29.488 [2024-07-15 20:26:54.824903] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e0a68 00:28:29.488 [2024-07-15 20:26:54.825880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.488 [2024-07-15 20:26:54.825903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:29.746 [2024-07-15 20:26:54.839407] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e88f8 00:28:29.746 [2024-07-15 20:26:54.840597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.746 [2024-07-15 20:26:54.840621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:29.746 [2024-07-15 20:26:54.853921] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190df550 00:28:29.746 [2024-07-15 20:26:54.855287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.746 [2024-07-15 20:26:54.855311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:29.746 [2024-07-15 20:26:54.868396] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190df118 00:28:29.746 [2024-07-15 20:26:54.869947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16818 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:29.746 [2024-07-15 20:26:54.869971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:29.746 [2024-07-15 20:26:54.882945] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190fac10 00:28:29.746 [2024-07-15 20:26:54.884691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.746 [2024-07-15 20:26:54.884715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:29.747 [2024-07-15 20:26:54.897443] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e88f8 00:28:29.747 [2024-07-15 20:26:54.899421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.747 [2024-07-15 20:26:54.899449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:29.747 [2024-07-15 20:26:54.911975] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f6cc8 00:28:29.747 [2024-07-15 20:26:54.914109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.747 [2024-07-15 20:26:54.914133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:29.747 [2024-07-15 20:26:54.924930] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190feb58 00:28:29.747 [2024-07-15 20:26:54.926510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.747 [2024-07-15 20:26:54.926534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:29.747 [2024-07-15 20:26:54.937537] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e5ec8 00:28:29.747 [2024-07-15 20:26:54.939473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.747 [2024-07-15 20:26:54.939497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:29.747 [2024-07-15 20:26:54.950332] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e23b8 00:28:29.747 [2024-07-15 20:26:54.951334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.747 [2024-07-15 20:26:54.951358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:29.747 [2024-07-15 20:26:54.964896] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f9f68 00:28:29.747 [2024-07-15 20:26:54.966115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5826 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.747 [2024-07-15 20:26:54.966140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:29.747 [2024-07-15 20:26:54.977992] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ed0b0 00:28:29.747 [2024-07-15 20:26:54.979191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.747 [2024-07-15 20:26:54.979214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:29.747 [2024-07-15 20:26:54.992547] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190de8a8 00:28:29.747 [2024-07-15 20:26:54.993909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.747 [2024-07-15 20:26:54.993933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:29.747 [2024-07-15 20:26:55.007956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f5be8 00:28:29.747 [2024-07-15 20:26:55.009563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.747 [2024-07-15 20:26:55.009587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:29.747 [2024-07-15 20:26:55.021747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190eff18 00:28:29.747 [2024-07-15 20:26:55.023325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.747 [2024-07-15 20:26:55.023349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:29.747 [2024-07-15 20:26:55.036074] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190fda78 00:28:29.747 [2024-07-15 20:26:55.037819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.747 [2024-07-15 20:26:55.037843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:29.747 [2024-07-15 20:26:55.049920] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ea248 00:28:29.747 [2024-07-15 20:26:55.051849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.747 [2024-07-15 20:26:55.051872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:29.747 [2024-07-15 20:26:55.064436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e01f8 00:28:29.747 [2024-07-15 20:26:55.066492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 
lba:6736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.747 [2024-07-15 20:26:55.066515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:29.747 [2024-07-15 20:26:55.078939] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ee190 00:28:29.747 [2024-07-15 20:26:55.081233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.747 [2024-07-15 20:26:55.081260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:29.747 [2024-07-15 20:26:55.088728] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e9e10 00:28:29.747 [2024-07-15 20:26:55.089780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.747 [2024-07-15 20:26:55.089804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:30.006 [2024-07-15 20:26:55.102754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e1b48 00:28:30.006 [2024-07-15 20:26:55.103800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.006 [2024-07-15 20:26:55.103825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:30.006 [2024-07-15 20:26:55.115680] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ee5c8 00:28:30.006 [2024-07-15 20:26:55.116669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.006 [2024-07-15 20:26:55.116693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:30.006 [2024-07-15 20:26:55.130138] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e7818 00:28:30.006 [2024-07-15 20:26:55.131340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.006 [2024-07-15 20:26:55.131363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:30.006 [2024-07-15 20:26:55.144658] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e6b70 00:28:30.006 [2024-07-15 20:26:55.146014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.006 [2024-07-15 20:26:55.146038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:30.006 [2024-07-15 20:26:55.159116] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e6738 00:28:30.006 [2024-07-15 20:26:55.160662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:35 nsid:1 lba:16129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.006 [2024-07-15 20:26:55.160685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:30.006 [2024-07-15 20:26:55.173581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e4578 00:28:30.006 [2024-07-15 20:26:55.175314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.006 [2024-07-15 20:26:55.175337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:30.006 [2024-07-15 20:26:55.188115] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e7818 00:28:30.006 [2024-07-15 20:26:55.190065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.006 [2024-07-15 20:26:55.190089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:30.006 [2024-07-15 20:26:55.202593] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ec840 00:28:30.006 [2024-07-15 20:26:55.204703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.006 [2024-07-15 20:26:55.204727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:30.006 [2024-07-15 20:26:55.217108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e12d8 00:28:30.006 [2024-07-15 20:26:55.219430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.006 [2024-07-15 20:26:55.219453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:30.006 [2024-07-15 20:26:55.226896] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f7100 00:28:30.006 [2024-07-15 20:26:55.227920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.006 [2024-07-15 20:26:55.227943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:30.006 [2024-07-15 20:26:55.239964] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f1ca0 00:28:30.006 [2024-07-15 20:26:55.240954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.006 [2024-07-15 20:26:55.240978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:30.006 [2024-07-15 20:26:55.254480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e2c28 00:28:30.006 [2024-07-15 20:26:55.255672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.006 [2024-07-15 20:26:55.255699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:30.006 [2024-07-15 20:26:55.269898] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e95a0 00:28:30.006 [2024-07-15 20:26:55.271277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.006 [2024-07-15 20:26:55.271300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:30.006 [2024-07-15 20:26:55.284144] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f5378 00:28:30.006 [2024-07-15 20:26:55.285703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.006 [2024-07-15 20:26:55.285726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:30.006 [2024-07-15 20:26:55.297368] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190eff18 00:28:30.006 [2024-07-15 20:26:55.298912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.006 [2024-07-15 20:26:55.298935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:30.006 [2024-07-15 20:26:55.310212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f57b0 00:28:30.006 [2024-07-15 20:26:55.311206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.006 [2024-07-15 20:26:55.311230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:30.006 [2024-07-15 20:26:55.324270] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190df988 00:28:30.006 [2024-07-15 20:26:55.325002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.006 [2024-07-15 20:26:55.325025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:30.006 [2024-07-15 20:26:55.338818] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ed920 00:28:30.006 [2024-07-15 20:26:55.339838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.006 [2024-07-15 20:26:55.339862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:30.006 [2024-07-15 20:26:55.353315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190fa7d8 00:28:30.006 [2024-07-15 
20:26:55.354416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.006 [2024-07-15 20:26:55.354441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:30.264 [2024-07-15 20:26:55.366418] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e73e0 00:28:30.264 [2024-07-15 20:26:55.368338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.264 [2024-07-15 20:26:55.368362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:30.264 [2024-07-15 20:26:55.378328] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ebfd0 00:28:30.264 [2024-07-15 20:26:55.379309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.264 [2024-07-15 20:26:55.379333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:30.264 [2024-07-15 20:26:55.393703] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190eb760 00:28:30.265 [2024-07-15 20:26:55.394894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.265 [2024-07-15 20:26:55.394918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:30.265 [2024-07-15 20:26:55.408023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190dece0 00:28:30.265 [2024-07-15 20:26:55.409398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.265 [2024-07-15 20:26:55.409422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:30.265 [2024-07-15 20:26:55.421106] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e6b70 00:28:30.265 [2024-07-15 20:26:55.422492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.265 [2024-07-15 20:26:55.422516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:30.265 [2024-07-15 20:26:55.436506] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f0350 00:28:30.265 [2024-07-15 20:26:55.438093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.265 [2024-07-15 20:26:55.438116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:30.265 [2024-07-15 20:26:55.450803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e99d8 
00:28:30.265 [2024-07-15 20:26:55.452572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.265 [2024-07-15 20:26:55.452596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:30.265 [2024-07-15 20:26:55.462109] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190df550 00:28:30.265 [2024-07-15 20:26:55.463108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.265 [2024-07-15 20:26:55.463131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:30.265 [2024-07-15 20:26:55.476306] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190fd640 00:28:30.265 [2024-07-15 20:26:55.477113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.265 [2024-07-15 20:26:55.477137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:30.265 [2024-07-15 20:26:55.490806] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190fd208 00:28:30.265 [2024-07-15 20:26:55.491834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.265 [2024-07-15 20:26:55.491858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:30.265 [2024-07-15 20:26:55.505263] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190fb480 00:28:30.265 [2024-07-15 20:26:55.506474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.265 [2024-07-15 20:26:55.506498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:30.265 [2024-07-15 20:26:55.518356] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e1f80 00:28:30.265 [2024-07-15 20:26:55.520296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.265 [2024-07-15 20:26:55.520320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:30.265 [2024-07-15 20:26:55.530210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190eb328 00:28:30.265 [2024-07-15 20:26:55.531191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.265 [2024-07-15 20:26:55.531214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:30.265 [2024-07-15 20:26:55.544662] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with 
pdu=0x2000190e2c28 00:28:30.265 [2024-07-15 20:26:55.545837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.265 [2024-07-15 20:26:55.545861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:30.265 [2024-07-15 20:26:55.559200] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190fd208 00:28:30.265 [2024-07-15 20:26:55.560612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.265 [2024-07-15 20:26:55.560635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:30.265 [2024-07-15 20:26:55.573679] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190fcdd0 00:28:30.265 [2024-07-15 20:26:55.575262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.265 [2024-07-15 20:26:55.575286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:30.265 [2024-07-15 20:26:55.588183] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e5a90 00:28:30.265 [2024-07-15 20:26:55.589926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.265 [2024-07-15 20:26:55.589949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:30.265 [2024-07-15 20:26:55.602702] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e2c28 00:28:30.265 [2024-07-15 20:26:55.604620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.265 [2024-07-15 20:26:55.604644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:30.265 [2024-07-15 20:26:55.613753] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e4140 00:28:30.523 [2024-07-15 20:26:55.614910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.523 [2024-07-15 20:26:55.614943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:30.523 [2024-07-15 20:26:55.628638] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f0350 00:28:30.523 [2024-07-15 20:26:55.629731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.523 [2024-07-15 20:26:55.629755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:30.523 [2024-07-15 20:26:55.641335] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x9a4a40) with pdu=0x2000190e9168 00:28:30.523 [2024-07-15 20:26:55.642403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.523 [2024-07-15 20:26:55.642426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:30.523 [2024-07-15 20:26:55.655821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ff3c8 00:28:30.523 [2024-07-15 20:26:55.657178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.523 [2024-07-15 20:26:55.657201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:30.523 [2024-07-15 20:26:55.670372] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f0bc0 00:28:30.523 [2024-07-15 20:26:55.671885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.523 [2024-07-15 20:26:55.671909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:30.523 [2024-07-15 20:26:55.684837] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e88f8 00:28:30.523 [2024-07-15 20:26:55.686567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.523 [2024-07-15 20:26:55.686590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:30.523 [2024-07-15 20:26:55.699342] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e9168 00:28:30.523 [2024-07-15 20:26:55.701227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.523 [2024-07-15 20:26:55.701250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:30.523 [2024-07-15 20:26:55.713861] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ed0b0 00:28:30.523 [2024-07-15 20:26:55.715939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.523 [2024-07-15 20:26:55.715962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:30.523 [2024-07-15 20:26:55.728315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190fef90 00:28:30.523 [2024-07-15 20:26:55.730596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.523 [2024-07-15 20:26:55.730620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:30.523 [2024-07-15 20:26:55.738109] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x9a4a40) with pdu=0x2000190f1ca0 00:28:30.523 [2024-07-15 20:26:55.739097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.523 [2024-07-15 20:26:55.739124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:30.523 [2024-07-15 20:26:55.752839] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190fd640 00:28:30.523 [2024-07-15 20:26:55.753984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.523 [2024-07-15 20:26:55.754009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:30.523 [2024-07-15 20:26:55.766623] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f31b8 00:28:30.523 [2024-07-15 20:26:55.767952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.523 [2024-07-15 20:26:55.767976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:30.523 [2024-07-15 20:26:55.781174] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190eaab8 00:28:30.523 [2024-07-15 20:26:55.782721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.523 [2024-07-15 20:26:55.782744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:30.523 [2024-07-15 20:26:55.795736] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190ebfd0 00:28:30.524 [2024-07-15 20:26:55.797437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.524 [2024-07-15 20:26:55.797460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:30.524 [2024-07-15 20:26:55.810193] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e6738 00:28:30.524 [2024-07-15 20:26:55.812083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.524 [2024-07-15 20:26:55.812107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:30.524 [2024-07-15 20:26:55.824735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e6300 00:28:30.524 [2024-07-15 20:26:55.826756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.524 [2024-07-15 20:26:55.826781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:30.524 [2024-07-15 20:26:55.839207] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f2d80 00:28:30.524 [2024-07-15 20:26:55.841421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.524 [2024-07-15 20:26:55.841444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:30.524 [2024-07-15 20:26:55.849004] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f92c0 00:28:30.524 [2024-07-15 20:26:55.849900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.524 [2024-07-15 20:26:55.849924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:30.524 [2024-07-15 20:26:55.863537] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190f1868 00:28:30.524 [2024-07-15 20:26:55.864617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.524 [2024-07-15 20:26:55.864641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:30.782 [2024-07-15 20:26:55.878047] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e8d30 00:28:30.782 [2024-07-15 20:26:55.879323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.782 [2024-07-15 20:26:55.879347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:30.782 [2024-07-15 20:26:55.892565] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9a4a40) with pdu=0x2000190e0a68 00:28:30.782 [2024-07-15 20:26:55.894013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.782 [2024-07-15 20:26:55.894036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:30.782 00:28:30.782 Latency(us) 00:28:30.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.782 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:30.782 nvme0n1 : 2.01 18402.07 71.88 0.00 0.00 6947.46 3485.32 17754.30 00:28:30.782 =================================================================================================================== 00:28:30.782 Total : 18402.07 71.88 0.00 0.00 6947.46 3485.32 17754.30 00:28:30.782 0 00:28:30.782 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:30.782 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:30.782 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:30.782 20:26:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:30.782 | 
.driver_specific 00:28:30.782 | .nvme_error 00:28:30.782 | .status_code 00:28:30.782 | .command_transient_transport_error' 00:28:31.040 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:28:31.040 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 204516 00:28:31.040 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 204516 ']' 00:28:31.040 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 204516 00:28:31.040 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:31.040 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:31.040 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 204516 00:28:31.040 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:31.040 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:31.040 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 204516' 00:28:31.040 killing process with pid 204516 00:28:31.040 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 204516 00:28:31.040 Received shutdown signal, test time was about 2.000000 seconds 00:28:31.040 00:28:31.040 Latency(us) 00:28:31.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.040 =================================================================================================================== 00:28:31.040 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:31.040 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 204516 00:28:31.299 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:31.299 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:31.299 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:31.299 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:31.299 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:31.299 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=205255 00:28:31.299 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 205255 /var/tmp/bperf.sock 00:28:31.299 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 205255 ']' 00:28:31.299 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:31.299 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:31.299 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:31.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
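The (( 144 > 0 )) check traced above is how digest.sh decides the run passed: because bdev_nvme_set_options --nvme-error-stat was issued before the controller attach, the NVMe bdev module keeps per-status-code completion counters, and the jq filter pulls the command_transient_transport_error count out of bdev_get_iostat. A minimal sketch of that check, assuming only the rpc.py path and bperf socket already shown in the trace (this is a condensation, not a verbatim copy of host/digest.sh):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # bdevperf was configured with "bdev_nvme_set_options --nvme-error-stat",
    # so bdev_get_iostat exposes NVMe completion-status counters under
    # driver_specific.nvme_error.
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')

    # Pass only if the injected digest corruption actually surfaced as
    # COMMAND TRANSIENT TRANSPORT ERROR completions (144 in the run above).
    (( errcount > 0 ))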
00:28:31.299 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:31.299 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:31.299 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:31.299 [2024-07-15 20:26:56.447528] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:28:31.299 [2024-07-15 20:26:56.447587] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid205255 ] 00:28:31.299 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:31.299 Zero copy mechanism will not be used. 00:28:31.299 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.299 [2024-07-15 20:26:56.518229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.299 [2024-07-15 20:26:56.610227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.865 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:31.865 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:31.865 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:31.865 20:26:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:31.865 20:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:31.865 20:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.865 20:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:31.865 20:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.865 20:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:31.865 20:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.124 nvme0n1 00:28:32.383 20:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:32.383 20:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.383 20:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:32.383 20:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.383 20:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:32.383 20:26:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:32.383 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:32.383 Zero copy mechanism will not be used. 00:28:32.383 Running I/O for 2 seconds... 00:28:32.384 [2024-07-15 20:26:57.611717] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.384 [2024-07-15 20:26:57.612173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.384 [2024-07-15 20:26:57.612210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.384 [2024-07-15 20:26:57.619320] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.384 [2024-07-15 20:26:57.619768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.384 [2024-07-15 20:26:57.619797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.384 [2024-07-15 20:26:57.625977] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.384 [2024-07-15 20:26:57.626412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.384 [2024-07-15 20:26:57.626440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.384 [2024-07-15 20:26:57.633062] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.384 [2024-07-15 20:26:57.633529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.384 [2024-07-15 20:26:57.633555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.384 [2024-07-15 20:26:57.640962] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.384 [2024-07-15 20:26:57.641401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.384 [2024-07-15 20:26:57.641427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.384 [2024-07-15 20:26:57.647417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.384 [2024-07-15 20:26:57.647863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.384 [2024-07-15 20:26:57.647889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.384 [2024-07-15 20:26:57.653916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.384 [2024-07-15 
20:26:57.654364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.384 [2024-07-15 20:26:57.654394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.384 [2024-07-15 20:26:57.660183] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.384 [2024-07-15 20:26:57.660615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.384 [2024-07-15 20:26:57.660642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.384 [2024-07-15 20:26:57.666571] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.384 [2024-07-15 20:26:57.667017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.384 [2024-07-15 20:26:57.667043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.384 [2024-07-15 20:26:57.673955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.384 [2024-07-15 20:26:57.674432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.384 [2024-07-15 20:26:57.674457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.384 [2024-07-15 20:26:57.681833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.384 [2024-07-15 20:26:57.682301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.384 [2024-07-15 20:26:57.682327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.384 [2024-07-15 20:26:57.690209] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.384 [2024-07-15 20:26:57.690658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.384 [2024-07-15 20:26:57.690683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.384 [2024-07-15 20:26:57.696691] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.384 [2024-07-15 20:26:57.697120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.384 [2024-07-15 20:26:57.697145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.384 [2024-07-15 20:26:57.702929] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with 
pdu=0x2000190fef90 00:28:32.384 [2024-07-15 20:26:57.703384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.384 [2024-07-15 20:26:57.703409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.384 [2024-07-15 20:26:57.709407] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.384 [2024-07-15 20:26:57.709847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.384 [2024-07-15 20:26:57.709872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.384 [2024-07-15 20:26:57.715787] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.384 [2024-07-15 20:26:57.716238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.384 [2024-07-15 20:26:57.716270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.384 [2024-07-15 20:26:57.722133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.384 [2024-07-15 20:26:57.722577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.384 [2024-07-15 20:26:57.722602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.384 [2024-07-15 20:26:57.728818] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.384 [2024-07-15 20:26:57.729294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.384 [2024-07-15 20:26:57.729320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.644 [2024-07-15 20:26:57.735918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.644 [2024-07-15 20:26:57.736381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.644 [2024-07-15 20:26:57.736415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.644 [2024-07-15 20:26:57.742241] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.644 [2024-07-15 20:26:57.742704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.644 [2024-07-15 20:26:57.742729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.644 [2024-07-15 20:26:57.748471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.644 [2024-07-15 20:26:57.748903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.644 [2024-07-15 20:26:57.748929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.644 [2024-07-15 20:26:57.754823] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.644 [2024-07-15 20:26:57.755264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.644 [2024-07-15 20:26:57.755289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.644 [2024-07-15 20:26:57.761244] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.644 [2024-07-15 20:26:57.761706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.644 [2024-07-15 20:26:57.761731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.644 [2024-07-15 20:26:57.768778] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.644 [2024-07-15 20:26:57.769227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.644 [2024-07-15 20:26:57.769252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.775948] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.776391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.776416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.782319] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.782747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.782772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.788723] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.789155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.789179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.795188] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.795617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.795643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.801875] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.802324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.802348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.808585] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.809020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.809044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.815107] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.815553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.815578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.822139] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.822593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.822618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.828850] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.829308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.829337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.835459] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.835906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.835931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
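Every tcp.c data_crc32_calc_done error in this stream is the intended effect of the injection armed just before perform_tests: the controller was attached with data digest (--ddgst) enabled over TCP, and accel_error_inject_error corrupts CRC-32C results, so the digest check fails, the write completes with COMMAND TRANSIENT TRANSPORT ERROR, and bdevperf retries it (--bdev-retry-count -1), which is why Fail/s can report 0.00 even while the transient-error counter grows. A condensed sketch of that setup, assuming the rpc.py path from the trace; the bdev options and the attach go to the bperf socket, while the injection is issued through the plain rpc_cmd helper (no -s flag in the trace, i.e. the default RPC socket):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # Initiator side: keep NVMe error statistics and retry failed I/O
    # indefinitely (-1) instead of failing the job.
    "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the NVMe-oF TCP controller with data digest (DDGST) enabled.
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Inject 'corrupt' errors into CRC-32C accel operations (arguments copied
    # verbatim from the trace); this is what makes the data digests mismatch.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32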
00:28:32.645 [2024-07-15 20:26:57.842041] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.842481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.842506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.849069] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.849527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.849552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.855578] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.856018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.856043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.861912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.862353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.862378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.868612] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.869056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.869081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.875801] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.876247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.876280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.883355] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.883800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.883824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.889728] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.890174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.890199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.895994] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.896428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.896453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.902250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.902708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.902732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.908754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.909207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.909232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.916148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.916603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.916628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.923793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.924239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.924271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.931139] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.931588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.931613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.938031] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.938480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.938505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.944694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.945135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.945160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.952038] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.952480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.952505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.959913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.960357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.960383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.645 [2024-07-15 20:26:57.967922] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.645 [2024-07-15 20:26:57.968377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.645 [2024-07-15 20:26:57.968403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.646 [2024-07-15 20:26:57.976012] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.646 [2024-07-15 20:26:57.976216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.646 [2024-07-15 20:26:57.976241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.646 [2024-07-15 20:26:57.985889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.646 [2024-07-15 20:26:57.986345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.646 [2024-07-15 20:26:57.986370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.906 [2024-07-15 20:26:57.994115] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.906 [2024-07-15 20:26:57.994573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.906 [2024-07-15 20:26:57.994598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.906 [2024-07-15 20:26:58.001458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.906 [2024-07-15 20:26:58.001910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.906 [2024-07-15 20:26:58.001934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.906 [2024-07-15 20:26:58.007954] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.906 [2024-07-15 20:26:58.008405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.906 [2024-07-15 20:26:58.008430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.906 [2024-07-15 20:26:58.014880] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.906 [2024-07-15 20:26:58.015318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.906 [2024-07-15 20:26:58.015347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.906 [2024-07-15 20:26:58.022917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.906 [2024-07-15 20:26:58.023396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.906 [2024-07-15 20:26:58.023421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.906 [2024-07-15 20:26:58.031447] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.906 [2024-07-15 20:26:58.031918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.906 [2024-07-15 20:26:58.031942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.906 [2024-07-15 20:26:58.041418] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.906 [2024-07-15 20:26:58.041696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.906 
[2024-07-15 20:26:58.041721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.906 [2024-07-15 20:26:58.050152] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.906 [2024-07-15 20:26:58.050632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.906 [2024-07-15 20:26:58.050658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.906 [2024-07-15 20:26:58.058906] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.906 [2024-07-15 20:26:58.059336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.906 [2024-07-15 20:26:58.059361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.906 [2024-07-15 20:26:58.067210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.906 [2024-07-15 20:26:58.067715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.906 [2024-07-15 20:26:58.067740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.906 [2024-07-15 20:26:58.075781] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.906 [2024-07-15 20:26:58.076221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.906 [2024-07-15 20:26:58.076246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.906 [2024-07-15 20:26:58.083094] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.906 [2024-07-15 20:26:58.083517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.906 [2024-07-15 20:26:58.083542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.906 [2024-07-15 20:26:58.089551] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.906 [2024-07-15 20:26:58.089952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.906 [2024-07-15 20:26:58.089976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.906 [2024-07-15 20:26:58.096672] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.906 [2024-07-15 20:26:58.097064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:32.907 [2024-07-15 20:26:58.097088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.907 [2024-07-15 20:26:58.104414] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.907 [2024-07-15 20:26:58.104814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.907 [2024-07-15 20:26:58.104838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.907 [2024-07-15 20:26:58.113003] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.907 [2024-07-15 20:26:58.113480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.907 [2024-07-15 20:26:58.113504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.907 [2024-07-15 20:26:58.122586] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.907 [2024-07-15 20:26:58.123062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.907 [2024-07-15 20:26:58.123086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.907 [2024-07-15 20:26:58.131900] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.907 [2024-07-15 20:26:58.132390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.907 [2024-07-15 20:26:58.132415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.907 [2024-07-15 20:26:58.141201] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.907 [2024-07-15 20:26:58.141620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.907 [2024-07-15 20:26:58.141644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.907 [2024-07-15 20:26:58.150236] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.907 [2024-07-15 20:26:58.150703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.907 [2024-07-15 20:26:58.150727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.907 [2024-07-15 20:26:58.159699] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.907 [2024-07-15 20:26:58.160185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.907 [2024-07-15 20:26:58.160210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.907 [2024-07-15 20:26:58.169205] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.907 [2024-07-15 20:26:58.169658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.907 [2024-07-15 20:26:58.169682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.907 [2024-07-15 20:26:58.178240] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.907 [2024-07-15 20:26:58.178720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.907 [2024-07-15 20:26:58.178745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.907 [2024-07-15 20:26:58.187498] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.907 [2024-07-15 20:26:58.187947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.907 [2024-07-15 20:26:58.187971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.907 [2024-07-15 20:26:58.196294] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.907 [2024-07-15 20:26:58.196766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.907 [2024-07-15 20:26:58.196790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.907 [2024-07-15 20:26:58.205232] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.907 [2024-07-15 20:26:58.205707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.907 [2024-07-15 20:26:58.205732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.907 [2024-07-15 20:26:58.214106] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.907 [2024-07-15 20:26:58.214593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.907 [2024-07-15 20:26:58.214618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.907 [2024-07-15 20:26:58.222415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.907 [2024-07-15 20:26:58.222804] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.907 [2024-07-15 20:26:58.222829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.907 [2024-07-15 20:26:58.231331] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.907 [2024-07-15 20:26:58.231754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.907 [2024-07-15 20:26:58.231778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.907 [2024-07-15 20:26:58.240289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.907 [2024-07-15 20:26:58.240694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.907 [2024-07-15 20:26:58.240723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.907 [2024-07-15 20:26:58.249324] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:32.907 [2024-07-15 20:26:58.249756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.907 [2024-07-15 20:26:58.249780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.257759] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.258201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.258226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.265237] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.265644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.265669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.272510] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.272930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.272954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.280595] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.280990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.281013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.287890] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.288286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.288311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.294474] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.294866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.294890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.302034] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.302436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.302460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.309109] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.309518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.309542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.316538] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.316936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.316960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.325021] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.325419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.325443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.331741] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 
[2024-07-15 20:26:58.332130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.332154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.338122] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.338519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.338543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.344286] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.344678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.344702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.350314] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.350689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.350714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.356650] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.357049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.357074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.363289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.363688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.363717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.369751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.370139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.370164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.376178] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) 
with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.376575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.376600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.382430] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.382821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.382846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.389895] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.390286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.390310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.396746] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.397135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.397160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.404011] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.404475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.404499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.168 [2024-07-15 20:26:58.412065] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.168 [2024-07-15 20:26:58.412502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.168 [2024-07-15 20:26:58.412527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.169 [2024-07-15 20:26:58.418711] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.169 [2024-07-15 20:26:58.419109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.169 [2024-07-15 20:26:58.419134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.169 [2024-07-15 20:26:58.425536] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.169 [2024-07-15 20:26:58.425936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.169 [2024-07-15 20:26:58.425962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.169 [2024-07-15 20:26:58.431722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.169 [2024-07-15 20:26:58.432109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.169 [2024-07-15 20:26:58.432134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.169 [2024-07-15 20:26:58.437949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.169 [2024-07-15 20:26:58.438346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.169 [2024-07-15 20:26:58.438371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.169 [2024-07-15 20:26:58.444586] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.169 [2024-07-15 20:26:58.444981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.169 [2024-07-15 20:26:58.445005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.169 [2024-07-15 20:26:58.451497] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.169 [2024-07-15 20:26:58.451896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.169 [2024-07-15 20:26:58.451920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.169 [2024-07-15 20:26:58.457528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.169 [2024-07-15 20:26:58.457924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.169 [2024-07-15 20:26:58.457949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.169 [2024-07-15 20:26:58.463566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.169 [2024-07-15 20:26:58.463968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.169 [2024-07-15 20:26:58.463993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.169 [2024-07-15 20:26:58.469696] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.169 [2024-07-15 20:26:58.470073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.169 [2024-07-15 20:26:58.470097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.169 [2024-07-15 20:26:58.476014] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.169 [2024-07-15 20:26:58.476409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.169 [2024-07-15 20:26:58.476434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.169 [2024-07-15 20:26:58.482216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.169 [2024-07-15 20:26:58.482616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.169 [2024-07-15 20:26:58.482641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.169 [2024-07-15 20:26:58.488341] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.169 [2024-07-15 20:26:58.488725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.169 [2024-07-15 20:26:58.488749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.169 [2024-07-15 20:26:58.495642] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.169 [2024-07-15 20:26:58.496017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.169 [2024-07-15 20:26:58.496041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.169 [2024-07-15 20:26:58.502143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.169 [2024-07-15 20:26:58.502548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.169 [2024-07-15 20:26:58.502572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.169 [2024-07-15 20:26:58.508686] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.169 [2024-07-15 20:26:58.509082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.169 [2024-07-15 20:26:58.509107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:33.169 [2024-07-15 20:26:58.515031] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.169 [2024-07-15 20:26:58.515415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.169 [2024-07-15 20:26:58.515440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.430 [2024-07-15 20:26:58.521109] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.430 [2024-07-15 20:26:58.521500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.430 [2024-07-15 20:26:58.521526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.430 [2024-07-15 20:26:58.527129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.430 [2024-07-15 20:26:58.527525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.430 [2024-07-15 20:26:58.527550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.430 [2024-07-15 20:26:58.533044] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.430 [2024-07-15 20:26:58.533448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.430 [2024-07-15 20:26:58.533476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.430 [2024-07-15 20:26:58.538965] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.430 [2024-07-15 20:26:58.539361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.430 [2024-07-15 20:26:58.539385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.430 [2024-07-15 20:26:58.545112] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.430 [2024-07-15 20:26:58.545505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.430 [2024-07-15 20:26:58.545530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.430 [2024-07-15 20:26:58.550955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.430 [2024-07-15 20:26:58.551354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.430 [2024-07-15 20:26:58.551378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.430 [2024-07-15 20:26:58.556956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.430 [2024-07-15 20:26:58.557360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.430 [2024-07-15 20:26:58.557385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.430 [2024-07-15 20:26:58.562962] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.430 [2024-07-15 20:26:58.563365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.430 [2024-07-15 20:26:58.563390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.430 [2024-07-15 20:26:58.568786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.430 [2024-07-15 20:26:58.569174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.430 [2024-07-15 20:26:58.569199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.430 [2024-07-15 20:26:58.574834] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.430 [2024-07-15 20:26:58.575215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.575240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.580990] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.581394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.581419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.587955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.588356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.588381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.594782] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.595173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.595197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.600829] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.601222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.601246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.606753] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.607126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.607152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.612728] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.613120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.613144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.619287] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.619684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.619709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.626131] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.626527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.626552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.632111] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.632505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.632529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.638006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.638397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.638422] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.643899] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.644296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.644319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.649835] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.650227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.650251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.655753] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.656130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.656155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.661987] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.662384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.662408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.668916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.669368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.669391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.675138] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.675534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.675559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.681166] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.681557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 
20:26:58.681581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.687491] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.687871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.687896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.693382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.693779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.693812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.699942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.700359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.700384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.707119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.707517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.707541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.713094] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.713491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.713516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.719121] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.719509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.719534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.725069] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.725465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.725489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.731160] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.731561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.731585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.737028] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.737416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.737441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.743665] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.744056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.744080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.751305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.751713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.431 [2024-07-15 20:26:58.751737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.431 [2024-07-15 20:26:58.759217] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.431 [2024-07-15 20:26:58.759678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.432 [2024-07-15 20:26:58.759703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.432 [2024-07-15 20:26:58.768470] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.432 [2024-07-15 20:26:58.768928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.432 [2024-07-15 20:26:58.768952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.432 [2024-07-15 20:26:58.777027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.432 [2024-07-15 20:26:58.777505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.432 [2024-07-15 20:26:58.777530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.692 [2024-07-15 20:26:58.785754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.692 [2024-07-15 20:26:58.786143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.692 [2024-07-15 20:26:58.786167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.692 [2024-07-15 20:26:58.793897] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.692 [2024-07-15 20:26:58.794362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.692 [2024-07-15 20:26:58.794387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.692 [2024-07-15 20:26:58.802243] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.692 [2024-07-15 20:26:58.802729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.692 [2024-07-15 20:26:58.802753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.692 [2024-07-15 20:26:58.811106] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.692 [2024-07-15 20:26:58.811562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.692 [2024-07-15 20:26:58.811586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.692 [2024-07-15 20:26:58.819844] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.692 [2024-07-15 20:26:58.820369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.692 [2024-07-15 20:26:58.820394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.692 [2024-07-15 20:26:58.828711] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.692 [2024-07-15 20:26:58.829205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.692 [2024-07-15 20:26:58.829230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.692 [2024-07-15 20:26:58.837895] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.692 [2024-07-15 20:26:58.838455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.692 [2024-07-15 20:26:58.838480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.692 [2024-07-15 20:26:58.846920] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.692 [2024-07-15 20:26:58.847423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.692 [2024-07-15 20:26:58.847447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.692 [2024-07-15 20:26:58.855200] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.692 [2024-07-15 20:26:58.855582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.692 [2024-07-15 20:26:58.855607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.692 [2024-07-15 20:26:58.861367] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.692 [2024-07-15 20:26:58.861771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.692 [2024-07-15 20:26:58.861796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.692 [2024-07-15 20:26:58.867728] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.692 [2024-07-15 20:26:58.868121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:58.868146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:58.873999] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:58.874393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:58.874418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:58.880023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:58.880420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:58.880458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:58.886550] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:58.887003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:58.887032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:58.894207] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:58.894667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:58.894691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:58.901687] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:58.902180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:58.902205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:58.909394] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:58.909828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:58.909853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:58.917437] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:58.917927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:58.917951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:58.925535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:58.925926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:58.925950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:58.933315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:58.933829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:58.933853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:58.941264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 
[2024-07-15 20:26:58.941706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:58.941731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:58.948896] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:58.949349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:58.949374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:58.956831] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:58.957315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:58.957340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:58.964761] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:58.965262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:58.965287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:58.972691] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:58.973092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:58.973117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:58.979658] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:58.980052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:58.980076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:58.986477] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:58.986877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:58.986902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:58.994577] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) 
with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:58.994969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:58.994994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:59.001833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:59.002217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:59.002242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:59.007980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:59.008362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:59.008385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:59.013860] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:59.014252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:59.014283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:59.019773] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:59.020162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:59.020186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:59.025694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:59.026082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:59.026106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:59.031592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:59.031985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:59.032010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.693 [2024-07-15 20:26:59.037547] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.693 [2024-07-15 20:26:59.037935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.693 [2024-07-15 20:26:59.037959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.954 [2024-07-15 20:26:59.043504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.954 [2024-07-15 20:26:59.043885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.954 [2024-07-15 20:26:59.043909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.954 [2024-07-15 20:26:59.049287] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.954 [2024-07-15 20:26:59.049680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.954 [2024-07-15 20:26:59.049704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.954 [2024-07-15 20:26:59.055104] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.954 [2024-07-15 20:26:59.055499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.954 [2024-07-15 20:26:59.055524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.954 [2024-07-15 20:26:59.060923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.954 [2024-07-15 20:26:59.061312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.954 [2024-07-15 20:26:59.061337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.954 [2024-07-15 20:26:59.066742] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.954 [2024-07-15 20:26:59.067125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.954 [2024-07-15 20:26:59.067153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.954 [2024-07-15 20:26:59.072581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.954 [2024-07-15 20:26:59.072981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.954 [2024-07-15 20:26:59.073005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.954 [2024-07-15 20:26:59.078477] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.954 [2024-07-15 20:26:59.078865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.954 [2024-07-15 20:26:59.078890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.954 [2024-07-15 20:26:59.084548] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.954 [2024-07-15 20:26:59.084998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.954 [2024-07-15 20:26:59.085022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.954 [2024-07-15 20:26:59.091973] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.954 [2024-07-15 20:26:59.092355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.954 [2024-07-15 20:26:59.092380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.954 [2024-07-15 20:26:59.098436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.954 [2024-07-15 20:26:59.098834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.954 [2024-07-15 20:26:59.098858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.954 [2024-07-15 20:26:59.104543] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.954 [2024-07-15 20:26:59.104938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.954 [2024-07-15 20:26:59.104961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.954 [2024-07-15 20:26:59.110532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.954 [2024-07-15 20:26:59.110908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.954 [2024-07-15 20:26:59.110933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.954 [2024-07-15 20:26:59.117900] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.954 [2024-07-15 20:26:59.118387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.954 [2024-07-15 20:26:59.118411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:28:33.954 [2024-07-15 20:26:59.125618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.954 [2024-07-15 20:26:59.126034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.954 [2024-07-15 20:26:59.126058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.954 [2024-07-15 20:26:59.133130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.954 [2024-07-15 20:26:59.133563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.954 [2024-07-15 20:26:59.133587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.954 [2024-07-15 20:26:59.140798] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.954 [2024-07-15 20:26:59.141211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.954 [2024-07-15 20:26:59.141235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.954 [2024-07-15 20:26:59.148935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.954 [2024-07-15 20:26:59.149442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.954 [2024-07-15 20:26:59.149466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.954 [2024-07-15 20:26:59.156932] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.954 [2024-07-15 20:26:59.157373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.955 [2024-07-15 20:26:59.157398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.955 [2024-07-15 20:26:59.165059] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.955 [2024-07-15 20:26:59.165497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.955 [2024-07-15 20:26:59.165521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.955 [2024-07-15 20:26:59.173311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.955 [2024-07-15 20:26:59.173761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.955 [2024-07-15 20:26:59.173785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.955 [2024-07-15 20:26:59.181312] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.955 [2024-07-15 20:26:59.181807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.955 [2024-07-15 20:26:59.181831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.955 [2024-07-15 20:26:59.189207] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.955 [2024-07-15 20:26:59.189711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.955 [2024-07-15 20:26:59.189735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.955 [2024-07-15 20:26:59.197192] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.955 [2024-07-15 20:26:59.197640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.955 [2024-07-15 20:26:59.197664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.955 [2024-07-15 20:26:59.205107] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.955 [2024-07-15 20:26:59.205600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.955 [2024-07-15 20:26:59.205624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.955 [2024-07-15 20:26:59.213109] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.955 [2024-07-15 20:26:59.213553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.955 [2024-07-15 20:26:59.213577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.955 [2024-07-15 20:26:59.220140] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.955 [2024-07-15 20:26:59.220512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.955 [2024-07-15 20:26:59.220537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.955 [2024-07-15 20:26:59.227271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.955 [2024-07-15 20:26:59.227642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.955 [2024-07-15 20:26:59.227666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.955 [2024-07-15 20:26:59.234480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.955 [2024-07-15 20:26:59.234976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.955 [2024-07-15 20:26:59.235001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.955 [2024-07-15 20:26:59.242527] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.955 [2024-07-15 20:26:59.242904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.955 [2024-07-15 20:26:59.242929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.955 [2024-07-15 20:26:59.250051] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.955 [2024-07-15 20:26:59.250544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.955 [2024-07-15 20:26:59.250568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.955 [2024-07-15 20:26:59.257832] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.955 [2024-07-15 20:26:59.258244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.955 [2024-07-15 20:26:59.258280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.955 [2024-07-15 20:26:59.265413] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.955 [2024-07-15 20:26:59.265837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.955 [2024-07-15 20:26:59.265861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:33.955 [2024-07-15 20:26:59.272842] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.955 [2024-07-15 20:26:59.273283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.955 [2024-07-15 20:26:59.273308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:33.955 [2024-07-15 20:26:59.280358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.955 [2024-07-15 20:26:59.280768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.955 [2024-07-15 20:26:59.280792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:33.955 [2024-07-15 20:26:59.287797] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.955 [2024-07-15 20:26:59.288293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.955 [2024-07-15 20:26:59.288318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.955 [2024-07-15 20:26:59.295224] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:33.955 [2024-07-15 20:26:59.295645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.955 [2024-07-15 20:26:59.295670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:34.215 [2024-07-15 20:26:59.302682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.215 [2024-07-15 20:26:59.303096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.215 [2024-07-15 20:26:59.303120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.215 [2024-07-15 20:26:59.310216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.310664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.310689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.317798] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.318277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.318301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.326619] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.327000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.327024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.334931] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.335312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 
[2024-07-15 20:26:59.335336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.341774] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.342152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.342176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.348722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.349085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.349110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.356275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.356650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.356674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.363229] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.363609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.363633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.369964] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.370346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.370371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.377534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.377980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.378004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.385124] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.385499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.385523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.392701] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.393188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.393213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.400856] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.401194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.401218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.408161] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.408584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.408609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.415692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.416176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.416200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.423889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.424321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.424345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.432184] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.432633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.432658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.440306] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.440767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.440792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.448874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.449361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.449386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.456872] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.457293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.457323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.465338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.465779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.465804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.473615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.474021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.474045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.481776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.482208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.482233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.490032] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.490434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.490458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.498398] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.498863] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.498887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.506786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.507229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.507260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.514812] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.515266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.515291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.523714] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.524120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.524144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.531604] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.216 [2024-07-15 20:26:59.532004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.216 [2024-07-15 20:26:59.532029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:34.216 [2024-07-15 20:26:59.540013] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.217 [2024-07-15 20:26:59.540430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.217 [2024-07-15 20:26:59.540455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.217 [2024-07-15 20:26:59.548195] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.217 [2024-07-15 20:26:59.548654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.217 [2024-07-15 20:26:59.548679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:34.217 [2024-07-15 20:26:59.556470] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.217 [2024-07-15 20:26:59.556879] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.217 [2024-07-15 20:26:59.556904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.476 [2024-07-15 20:26:59.565187] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.476 [2024-07-15 20:26:59.565625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.476 [2024-07-15 20:26:59.565650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:34.476 [2024-07-15 20:26:59.573550] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.476 [2024-07-15 20:26:59.573966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.476 [2024-07-15 20:26:59.573991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.476 [2024-07-15 20:26:59.581605] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.476 [2024-07-15 20:26:59.582019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.476 [2024-07-15 20:26:59.582044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:34.476 [2024-07-15 20:26:59.589762] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.476 [2024-07-15 20:26:59.590244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.476 [2024-07-15 20:26:59.590276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:34.476 [2024-07-15 20:26:59.597651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7d9cd0) with pdu=0x2000190fef90 00:28:34.476 [2024-07-15 20:26:59.598039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.476 [2024-07-15 20:26:59.598064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:34.476 00:28:34.476 Latency(us) 00:28:34.476 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.476 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:34.476 nvme0n1 : 2.00 4270.29 533.79 0.00 0.00 3740.19 2785.28 13047.62 00:28:34.476 =================================================================================================================== 00:28:34.476 Total : 4270.29 533.79 0.00 0.00 3740.19 2785.28 13047.62 00:28:34.476 0 00:28:34.476 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:34.476 20:26:59 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:34.476 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:34.476 | .driver_specific 00:28:34.476 | .nvme_error 00:28:34.476 | .status_code 00:28:34.476 | .command_transient_transport_error' 00:28:34.476 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:34.735 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 275 > 0 )) 00:28:34.735 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 205255 00:28:34.735 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 205255 ']' 00:28:34.735 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 205255 00:28:34.735 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:34.736 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:34.736 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 205255 00:28:34.736 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:34.736 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:34.736 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 205255' 00:28:34.736 killing process with pid 205255 00:28:34.736 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 205255 00:28:34.736 Received shutdown signal, test time was about 2.000000 seconds 00:28:34.736 00:28:34.736 Latency(us) 00:28:34.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.736 =================================================================================================================== 00:28:34.736 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:34.736 20:26:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 205255 00:28:34.994 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 203158 00:28:34.994 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 203158 ']' 00:28:34.994 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 203158 00:28:34.994 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:34.994 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:34.995 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 203158 00:28:34.995 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:34.995 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:34.995 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 203158' 00:28:34.995 killing process with pid 203158 00:28:34.995 20:27:00 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 203158 00:28:34.995 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 203158 00:28:35.253 00:28:35.253 real 0m15.670s 00:28:35.253 user 0m30.912s 00:28:35.253 sys 0m4.034s 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:35.253 ************************************ 00:28:35.253 END TEST nvmf_digest_error 00:28:35.253 ************************************ 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:35.253 rmmod nvme_tcp 00:28:35.253 rmmod nvme_fabrics 00:28:35.253 rmmod nvme_keyring 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 203158 ']' 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 203158 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 203158 ']' 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 203158 00:28:35.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (203158) - No such process 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 203158 is not found' 00:28:35.253 Process with pid 203158 is not found 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:35.253 20:27:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.790 20:27:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:37.790 00:28:37.790 real 0m40.059s 00:28:37.790 user 1m5.741s 00:28:37.790 sys 0m12.503s 00:28:37.790 20:27:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:37.790 
20:27:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:37.790 ************************************ 00:28:37.790 END TEST nvmf_digest 00:28:37.790 ************************************ 00:28:37.790 20:27:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:37.790 20:27:02 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:28:37.790 20:27:02 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:28:37.790 20:27:02 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:28:37.790 20:27:02 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:37.790 20:27:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:37.790 20:27:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:37.790 20:27:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:37.790 ************************************ 00:28:37.790 START TEST nvmf_bdevperf 00:28:37.790 ************************************ 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:37.790 * Looking for test storage... 00:28:37.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.790 20:27:02 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 
-- # MALLOC_BDEV_SIZE=64 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:37.791 20:27:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- 
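# --- annotation: the pci_devs scan above keys off numeric vendor/device IDs; roughly
# --- the same discovery can be done by hand for the E810 parts (8086:159b) this run
# --- ends up using - a sketch, not the helper itself.
lspci -D -d 8086:159b                            # list E810 functions with full PCI addresses
ls /sys/bus/pci/devices/0000:af:00.0/net/        # netdev name behind one function (cvl_0_0 in this run)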
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:43.065 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:43.065 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:43.066 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:28:43.066 Found net devices under 0000:af:00.0: cvl_0_0 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:43.066 Found net devices under 0000:af:00.1: cvl_0_1 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:43.066 20:27:08 
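# --- annotation: condensed form of the nvmf_tcp_init plumbing traced above - the
# --- target-side NIC moves into its own namespace while the initiator-side NIC stays
# --- in the root namespace; commands taken from the trace, collected here as a sketch.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow the NVMe/TCP port in
ping -c 1 10.0.0.2                                                   # connectivity check, as in the log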
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:43.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:43.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:28:43.066 00:28:43.066 --- 10.0.0.2 ping statistics --- 00:28:43.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.066 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:43.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:43.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:28:43.066 00:28:43.066 --- 10.0.0.1 ping statistics --- 00:28:43.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.066 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=209529 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 209529 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 209529 ']' 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
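# --- annotation: the launch being waited on above, plus a simple polling loop for the
# --- RPC socket; the real waitforlisten helper also checks pid and RPC liveness, so
# --- this is only a sketch of the idea.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done    # wait for the app to expose its RPC socket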
00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:43.066 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:43.066 [2024-07-15 20:27:08.411621] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:28:43.066 [2024-07-15 20:27:08.411678] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:43.326 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.326 [2024-07-15 20:27:08.489090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:43.326 [2024-07-15 20:27:08.582170] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.326 [2024-07-15 20:27:08.582211] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:43.326 [2024-07-15 20:27:08.582222] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:43.326 [2024-07-15 20:27:08.582231] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:43.326 [2024-07-15 20:27:08.582238] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:43.326 [2024-07-15 20:27:08.582306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:43.326 [2024-07-15 20:27:08.582397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:43.326 [2024-07-15 20:27:08.582400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:43.585 [2024-07-15 20:27:08.735543] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:43.585 Malloc0 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:43.585 [2024-07-15 20:27:08.812224] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:43.585 { 00:28:43.585 "params": { 00:28:43.585 "name": "Nvme$subsystem", 00:28:43.585 "trtype": "$TEST_TRANSPORT", 00:28:43.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.585 "adrfam": "ipv4", 00:28:43.585 "trsvcid": "$NVMF_PORT", 00:28:43.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.585 "hdgst": ${hdgst:-false}, 00:28:43.585 "ddgst": ${ddgst:-false} 00:28:43.585 }, 00:28:43.585 "method": "bdev_nvme_attach_controller" 00:28:43.585 } 00:28:43.585 EOF 00:28:43.585 )") 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:43.585 20:27:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:43.585 "params": { 00:28:43.585 "name": "Nvme1", 00:28:43.585 "trtype": "tcp", 00:28:43.585 "traddr": "10.0.0.2", 00:28:43.585 "adrfam": "ipv4", 00:28:43.585 "trsvcid": "4420", 00:28:43.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:43.585 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:43.585 "hdgst": false, 00:28:43.585 "ddgst": false 00:28:43.585 }, 00:28:43.585 "method": "bdev_nvme_attach_controller" 00:28:43.585 }' 00:28:43.585 [2024-07-15 20:27:08.866227] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
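# --- annotation: the rpc_cmd calls traced above correspond to plain scripts/rpc.py
# --- invocations against the same default RPC socket; arguments copied from the trace.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420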
00:28:43.585 [2024-07-15 20:27:08.866288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid209870 ] 00:28:43.585 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.844 [2024-07-15 20:27:08.948096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.844 [2024-07-15 20:27:09.035886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.102 Running I/O for 1 seconds... 00:28:45.039 00:28:45.039 Latency(us) 00:28:45.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.039 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:45.039 Verification LBA range: start 0x0 length 0x4000 00:28:45.039 Nvme1n1 : 1.01 7360.44 28.75 0.00 0.00 17310.10 3813.00 20256.58 00:28:45.039 =================================================================================================================== 00:28:45.039 Total : 7360.44 28.75 0.00 0.00 17310.10 3813.00 20256.58 00:28:45.298 20:27:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=210358 00:28:45.298 20:27:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:45.298 20:27:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:45.298 20:27:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:45.298 20:27:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:45.298 20:27:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:45.298 20:27:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:45.298 20:27:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:45.298 { 00:28:45.298 "params": { 00:28:45.298 "name": "Nvme$subsystem", 00:28:45.298 "trtype": "$TEST_TRANSPORT", 00:28:45.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.298 "adrfam": "ipv4", 00:28:45.298 "trsvcid": "$NVMF_PORT", 00:28:45.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.298 "hdgst": ${hdgst:-false}, 00:28:45.298 "ddgst": ${ddgst:-false} 00:28:45.298 }, 00:28:45.298 "method": "bdev_nvme_attach_controller" 00:28:45.298 } 00:28:45.298 EOF 00:28:45.298 )") 00:28:45.298 20:27:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:45.298 20:27:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:45.298 20:27:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:45.298 20:27:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:45.298 "params": { 00:28:45.298 "name": "Nvme1", 00:28:45.298 "trtype": "tcp", 00:28:45.298 "traddr": "10.0.0.2", 00:28:45.298 "adrfam": "ipv4", 00:28:45.298 "trsvcid": "4420", 00:28:45.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:45.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:45.298 "hdgst": false, 00:28:45.298 "ddgst": false 00:28:45.298 }, 00:28:45.298 "method": "bdev_nvme_attach_controller" 00:28:45.298 }' 00:28:45.298 [2024-07-15 20:27:10.472272] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
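# --- annotation: the attach-controller JSON printed above, written to a file, drives
# --- bdevperf directly with the same -q/-o/-w values used for the 1-second verify run;
# --- the outer "subsystems"/"bdev" wrapper is not visible in this excerpt and is assumed
# --- from gen_nvmf_target_json.
cat > /tmp/nvme1.json <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller",
 "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420",
 "subnqn":"nqn.2016-06.io.spdk:cnode1","hostnqn":"nqn.2016-06.io.spdk:host1",
 "hdgst":false,"ddgst":false}}]}]}
EOF
./build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 1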
00:28:45.298 [2024-07-15 20:27:10.472333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid210358 ] 00:28:45.298 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.298 [2024-07-15 20:27:10.553199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.298 [2024-07-15 20:27:10.638995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.557 Running I/O for 15 seconds... 00:28:48.101 20:27:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 209529 00:28:48.101 20:27:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:48.101 [2024-07-15 20:27:13.439507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.439552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.439576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.439591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.439606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.439618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.439633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.439645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.439658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.439674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.439687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.439697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.439709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.439721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.439735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.439747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
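# --- annotation: the abort storm that follows is the expected result of host/bdevperf.sh
# --- killing the target while the 15-second verify run still has queue-depth-128 I/O in
# --- flight; a sketch of the two script lines traced above.
kill -9 "$nvmfpid"    # nvmf_tgt (pid 209529 in this run) is dropped hard mid-run
sleep 3               # the script pauses before its next step (host/bdevperf.sh@35)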
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.439760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.439771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.439785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.439796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.439807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.439817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.439829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.439840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.439852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.439861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.439873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.439884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.439899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.439912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.439926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.439936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.439949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.439962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.439978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.439991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 
[2024-07-15 20:27:13.440006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.440018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.440035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.440047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.440060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.440069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.440081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.440091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.440103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.440113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.440125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.440135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.440146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.440156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.440168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.440178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.440189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.440199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.101 [2024-07-15 20:27:13.440211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.101 [2024-07-15 20:27:13.440221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:29 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26688 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 
20:27:13.440906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.440982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.440992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.441004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.441013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.441024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.441034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.441046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.441056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.441067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.441077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.441089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.441100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.441111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.441122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.441134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.441144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.102 [2024-07-15 20:27:13.441155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.102 [2024-07-15 20:27:13.441165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.103 [2024-07-15 20:27:13.441187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.103 [2024-07-15 20:27:13.441208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:26976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:27000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:27008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:27016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:27024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:27048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:27056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441786] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:27088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:27096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:27112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:27128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.441983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.441993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.442005] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:27160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.442014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.442026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.103 [2024-07-15 20:27:13.442035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.442047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.103 [2024-07-15 20:27:13.442057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.442069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.103 [2024-07-15 20:27:13.442079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.103 [2024-07-15 20:27:13.442091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.104 [2024-07-15 20:27:13.442101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.104 [2024-07-15 20:27:13.442112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:26200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.104 [2024-07-15 20:27:13.442122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.104 [2024-07-15 20:27:13.442134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:26208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.104 [2024-07-15 20:27:13.442143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.104 [2024-07-15 20:27:13.442155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.104 [2024-07-15 20:27:13.442165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.104 [2024-07-15 20:27:13.442176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.104 [2024-07-15 20:27:13.442186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.104 [2024-07-15 20:27:13.442200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:26232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.104 [2024-07-15 20:27:13.442210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.104 [2024-07-15 20:27:13.442221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 
lba:26240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.104 [2024-07-15 20:27:13.442231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.104 [2024-07-15 20:27:13.442243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:26248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.104 [2024-07-15 20:27:13.442252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.104 [2024-07-15 20:27:13.442269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.104 [2024-07-15 20:27:13.442279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.104 [2024-07-15 20:27:13.442291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.104 [2024-07-15 20:27:13.442300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.104 [2024-07-15 20:27:13.442312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:26272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.104 [2024-07-15 20:27:13.442322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.104 [2024-07-15 20:27:13.442334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.104 [2024-07-15 20:27:13.442343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.104 [2024-07-15 20:27:13.442355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.104 [2024-07-15 20:27:13.442364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.104 [2024-07-15 20:27:13.442376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:27176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.104 [2024-07-15 20:27:13.442386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.104 [2024-07-15 20:27:13.442397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024200 is same with the state(5) to be set 00:28:48.104 [2024-07-15 20:27:13.442407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:48.104 [2024-07-15 20:27:13.442415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:48.104 [2024-07-15 20:27:13.442423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26296 len:8 PRP1 0x0 PRP2 0x0 00:28:48.104 [2024-07-15 20:27:13.442433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.104 [2024-07-15 20:27:13.442480] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2024200 was 
disconnected and freed. reset controller. 00:28:48.104 [2024-07-15 20:27:13.442530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.104 [2024-07-15 20:27:13.442543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.104 [2024-07-15 20:27:13.442556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.104 [2024-07-15 20:27:13.442566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.104 [2024-07-15 20:27:13.442577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.104 [2024-07-15 20:27:13.442587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.104 [2024-07-15 20:27:13.442597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.104 [2024-07-15 20:27:13.442607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.104 [2024-07-15 20:27:13.442616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.104 [2024-07-15 20:27:13.446813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.104 [2024-07-15 20:27:13.446842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.104 [2024-07-15 20:27:13.447721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.104 [2024-07-15 20:27:13.447742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.104 [2024-07-15 20:27:13.447752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.104 [2024-07-15 20:27:13.448016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.104 [2024-07-15 20:27:13.448289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.104 [2024-07-15 20:27:13.448301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.104 [2024-07-15 20:27:13.448311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.364 [2024-07-15 20:27:13.452545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
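For readers decoding the abort records above: SPDK prints NVMe completion status as "(SCT/SC)" in hex, so "(00/08)" is status code type 0x0 (generic command status) with status code 0x08, which the NVMe specification defines as Command Aborted due to SQ Deletion — the same "ABORTED - SQ DELETION" text the log emits while the qpair is torn down. Below is a minimal standalone sketch in plain C that maps that pair to the same string; it mirrors, but is not, SPDK's own spdk_nvme_print_completion() formatting.

```c
#include <stdint.h>
#include <stdio.h>

/* Decode the "(SCT/SC)" pair shown in the log, e.g. "(00/08)".
 * Only the generic status type (SCT 0x0) codes seen above are handled;
 * anything else falls through to "UNRECOGNIZED". */
static const char *nvme_status_str(uint8_t sct, uint8_t sc)
{
    if (sct == 0x0) {                 /* generic command status */
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x08: return "ABORTED - SQ DELETION";
        default:   break;
        }
    }
    return "UNRECOGNIZED";
}

int main(void)
{
    uint8_t sct = 0x00, sc = 0x08;    /* the "(00/08)" pair from the log */
    printf("(%02x/%02x) -> %s\n", sct, sc, nvme_status_str(sct, sc));
    return 0;
}
```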
00:28:48.364 [2024-07-15 20:27:13.461817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.364 [2024-07-15 20:27:13.462390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.364 [2024-07-15 20:27:13.462435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.364 [2024-07-15 20:27:13.462457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.364 [2024-07-15 20:27:13.463034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.364 [2024-07-15 20:27:13.463402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.364 [2024-07-15 20:27:13.463415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.364 [2024-07-15 20:27:13.463424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.364 [2024-07-15 20:27:13.467658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.364 [2024-07-15 20:27:13.476405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.364 [2024-07-15 20:27:13.476974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.364 [2024-07-15 20:27:13.477017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.364 [2024-07-15 20:27:13.477046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.364 [2024-07-15 20:27:13.477528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.364 [2024-07-15 20:27:13.477792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.364 [2024-07-15 20:27:13.477803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.364 [2024-07-15 20:27:13.477812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.364 [2024-07-15 20:27:13.482045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
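Each failed reset cycle above hinges on "connect() failed, errno = 111": on Linux, errno 111 is ECONNREFUSED, i.e. nothing is accepting TCP connections on 10.0.0.2:4420 while the target side is down. The following is a small standalone sketch (not SPDK code; the address and port simply mirror the log, and the observed errno depends on the local network setup) that surfaces the same errno when no listener is present.

```c
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Attempt the same kind of TCP connect the log shows failing:
     * 10.0.0.2:4420 with no NVMe-oF target listening yields
     * errno 111 (ECONNREFUSED) on Linux when the host actively refuses. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* ECONNREFUSED == 111 on Linux, matching "errno = 111" above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```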
00:28:48.364 [2024-07-15 20:27:13.491042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.364 [2024-07-15 20:27:13.491610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.364 [2024-07-15 20:27:13.491654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.364 [2024-07-15 20:27:13.491674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.364 [2024-07-15 20:27:13.492251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.364 [2024-07-15 20:27:13.492764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.364 [2024-07-15 20:27:13.492776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.364 [2024-07-15 20:27:13.492785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.364 [2024-07-15 20:27:13.497019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.364 [2024-07-15 20:27:13.505771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.364 [2024-07-15 20:27:13.506258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.364 [2024-07-15 20:27:13.506280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.364 [2024-07-15 20:27:13.506300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.364 [2024-07-15 20:27:13.506562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.364 [2024-07-15 20:27:13.506826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.364 [2024-07-15 20:27:13.506837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.364 [2024-07-15 20:27:13.506846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.364 [2024-07-15 20:27:13.511078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.364 [2024-07-15 20:27:13.520322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.364 [2024-07-15 20:27:13.520875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.365 [2024-07-15 20:27:13.520895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.365 [2024-07-15 20:27:13.520904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.365 [2024-07-15 20:27:13.521167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.365 [2024-07-15 20:27:13.521437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.365 [2024-07-15 20:27:13.521453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.365 [2024-07-15 20:27:13.521463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.365 [2024-07-15 20:27:13.525696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.365 [2024-07-15 20:27:13.534936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.365 [2024-07-15 20:27:13.535474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.365 [2024-07-15 20:27:13.535495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.365 [2024-07-15 20:27:13.535505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.365 [2024-07-15 20:27:13.535767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.365 [2024-07-15 20:27:13.536030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.365 [2024-07-15 20:27:13.536041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.365 [2024-07-15 20:27:13.536050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.365 [2024-07-15 20:27:13.540289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.365 [2024-07-15 20:27:13.549537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.365 [2024-07-15 20:27:13.550093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.365 [2024-07-15 20:27:13.550136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.365 [2024-07-15 20:27:13.550157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.365 [2024-07-15 20:27:13.550697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.365 [2024-07-15 20:27:13.550961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.365 [2024-07-15 20:27:13.550972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.365 [2024-07-15 20:27:13.550981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.365 [2024-07-15 20:27:13.555218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.365 [2024-07-15 20:27:13.564217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.365 [2024-07-15 20:27:13.564796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.365 [2024-07-15 20:27:13.564837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.365 [2024-07-15 20:27:13.564858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.365 [2024-07-15 20:27:13.565450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.365 [2024-07-15 20:27:13.565735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.365 [2024-07-15 20:27:13.565746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.365 [2024-07-15 20:27:13.565755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.365 [2024-07-15 20:27:13.569984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.365 [2024-07-15 20:27:13.578992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.365 [2024-07-15 20:27:13.579466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.365 [2024-07-15 20:27:13.579487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.365 [2024-07-15 20:27:13.579497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.365 [2024-07-15 20:27:13.579759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.365 [2024-07-15 20:27:13.580023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.365 [2024-07-15 20:27:13.580034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.365 [2024-07-15 20:27:13.580043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.365 [2024-07-15 20:27:13.584280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.365 [2024-07-15 20:27:13.593524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.365 [2024-07-15 20:27:13.594001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.365 [2024-07-15 20:27:13.594021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.365 [2024-07-15 20:27:13.594031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.365 [2024-07-15 20:27:13.594300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.365 [2024-07-15 20:27:13.594564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.365 [2024-07-15 20:27:13.594575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.365 [2024-07-15 20:27:13.594584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.365 [2024-07-15 20:27:13.598815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.365 [2024-07-15 20:27:13.608048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.365 [2024-07-15 20:27:13.608615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.365 [2024-07-15 20:27:13.608656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.365 [2024-07-15 20:27:13.608678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.365 [2024-07-15 20:27:13.609253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.365 [2024-07-15 20:27:13.609849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.365 [2024-07-15 20:27:13.609873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.365 [2024-07-15 20:27:13.609902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.365 [2024-07-15 20:27:13.614134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.365 [2024-07-15 20:27:13.622639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.365 [2024-07-15 20:27:13.623096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.365 [2024-07-15 20:27:13.623117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.365 [2024-07-15 20:27:13.623126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.365 [2024-07-15 20:27:13.623400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.365 [2024-07-15 20:27:13.623665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.365 [2024-07-15 20:27:13.623676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.365 [2024-07-15 20:27:13.623685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.365 [2024-07-15 20:27:13.627917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.365 [2024-07-15 20:27:13.637158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.365 [2024-07-15 20:27:13.637707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.365 [2024-07-15 20:27:13.637728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.365 [2024-07-15 20:27:13.637737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.365 [2024-07-15 20:27:13.637999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.365 [2024-07-15 20:27:13.638268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.365 [2024-07-15 20:27:13.638280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.365 [2024-07-15 20:27:13.638289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.365 [2024-07-15 20:27:13.642521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.365 [2024-07-15 20:27:13.651762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.365 [2024-07-15 20:27:13.652314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.365 [2024-07-15 20:27:13.652356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.365 [2024-07-15 20:27:13.652379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.365 [2024-07-15 20:27:13.652925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.365 [2024-07-15 20:27:13.653188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.365 [2024-07-15 20:27:13.653199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.365 [2024-07-15 20:27:13.653208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.365 [2024-07-15 20:27:13.659218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.365 [2024-07-15 20:27:13.667058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.365 [2024-07-15 20:27:13.667532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.365 [2024-07-15 20:27:13.667553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.365 [2024-07-15 20:27:13.667563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.365 [2024-07-15 20:27:13.667825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.365 [2024-07-15 20:27:13.668088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.365 [2024-07-15 20:27:13.668099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.365 [2024-07-15 20:27:13.668112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.365 [2024-07-15 20:27:13.672349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.365 [2024-07-15 20:27:13.681597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.365 [2024-07-15 20:27:13.682151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.366 [2024-07-15 20:27:13.682172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.366 [2024-07-15 20:27:13.682182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.366 [2024-07-15 20:27:13.682452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.366 [2024-07-15 20:27:13.682716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.366 [2024-07-15 20:27:13.682726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.366 [2024-07-15 20:27:13.682735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.366 [2024-07-15 20:27:13.686965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.366 [2024-07-15 20:27:13.696201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.366 [2024-07-15 20:27:13.696687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.366 [2024-07-15 20:27:13.696708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.366 [2024-07-15 20:27:13.696718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.366 [2024-07-15 20:27:13.696980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.366 [2024-07-15 20:27:13.697243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.366 [2024-07-15 20:27:13.697260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.366 [2024-07-15 20:27:13.697270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.366 [2024-07-15 20:27:13.701505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.366 [2024-07-15 20:27:13.710759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.366 [2024-07-15 20:27:13.711216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.366 [2024-07-15 20:27:13.711269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.366 [2024-07-15 20:27:13.711291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.366 [2024-07-15 20:27:13.711868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.366 [2024-07-15 20:27:13.712250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.366 [2024-07-15 20:27:13.712266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.366 [2024-07-15 20:27:13.712275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.630 [2024-07-15 20:27:13.716509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.630 [2024-07-15 20:27:13.725512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.630 [2024-07-15 20:27:13.726062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.630 [2024-07-15 20:27:13.726086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.630 [2024-07-15 20:27:13.726096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.630 [2024-07-15 20:27:13.726366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.630 [2024-07-15 20:27:13.726630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.630 [2024-07-15 20:27:13.726641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.630 [2024-07-15 20:27:13.726650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.630 [2024-07-15 20:27:13.731067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.630 [2024-07-15 20:27:13.740067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.630 [2024-07-15 20:27:13.740605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.630 [2024-07-15 20:27:13.740626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.630 [2024-07-15 20:27:13.740636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.630 [2024-07-15 20:27:13.740899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.630 [2024-07-15 20:27:13.741161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.630 [2024-07-15 20:27:13.741172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.630 [2024-07-15 20:27:13.741181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.630 [2024-07-15 20:27:13.745421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.630 [2024-07-15 20:27:13.754662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.630 [2024-07-15 20:27:13.755221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.630 [2024-07-15 20:27:13.755242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.630 [2024-07-15 20:27:13.755252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.630 [2024-07-15 20:27:13.755522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.630 [2024-07-15 20:27:13.755786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.630 [2024-07-15 20:27:13.755797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.630 [2024-07-15 20:27:13.755806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.630 [2024-07-15 20:27:13.760041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.630 [2024-07-15 20:27:13.769289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.630 [2024-07-15 20:27:13.769842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.630 [2024-07-15 20:27:13.769863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.630 [2024-07-15 20:27:13.769873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.630 [2024-07-15 20:27:13.770136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.630 [2024-07-15 20:27:13.770412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.630 [2024-07-15 20:27:13.770424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.630 [2024-07-15 20:27:13.770433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.630 [2024-07-15 20:27:13.774662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.630 [2024-07-15 20:27:13.783922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.630 [2024-07-15 20:27:13.784477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.630 [2024-07-15 20:27:13.784499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.630 [2024-07-15 20:27:13.784509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.630 [2024-07-15 20:27:13.784773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.630 [2024-07-15 20:27:13.785036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.630 [2024-07-15 20:27:13.785046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.630 [2024-07-15 20:27:13.785056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.630 [2024-07-15 20:27:13.789297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.630 [2024-07-15 20:27:13.798540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.630 [2024-07-15 20:27:13.799100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.630 [2024-07-15 20:27:13.799121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.630 [2024-07-15 20:27:13.799131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.630 [2024-07-15 20:27:13.799401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.630 [2024-07-15 20:27:13.799665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.630 [2024-07-15 20:27:13.799676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.630 [2024-07-15 20:27:13.799685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.630 [2024-07-15 20:27:13.803912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.630 [2024-07-15 20:27:13.813143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.630 [2024-07-15 20:27:13.813714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.630 [2024-07-15 20:27:13.813755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.630 [2024-07-15 20:27:13.813776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.630 [2024-07-15 20:27:13.814299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.630 [2024-07-15 20:27:13.814563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.630 [2024-07-15 20:27:13.814574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.630 [2024-07-15 20:27:13.814583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.630 [2024-07-15 20:27:13.818817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.630 [2024-07-15 20:27:13.827815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.630 [2024-07-15 20:27:13.828390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.630 [2024-07-15 20:27:13.828433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.630 [2024-07-15 20:27:13.828454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.630 [2024-07-15 20:27:13.829030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.630 [2024-07-15 20:27:13.829338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.630 [2024-07-15 20:27:13.829350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.630 [2024-07-15 20:27:13.829360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.630 [2024-07-15 20:27:13.833587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.630 [2024-07-15 20:27:13.842583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.630 [2024-07-15 20:27:13.843162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.630 [2024-07-15 20:27:13.843182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.630 [2024-07-15 20:27:13.843192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.630 [2024-07-15 20:27:13.843461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.630 [2024-07-15 20:27:13.843725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.630 [2024-07-15 20:27:13.843736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.630 [2024-07-15 20:27:13.843745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.630 [2024-07-15 20:27:13.847976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.630 [2024-07-15 20:27:13.857217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.630 [2024-07-15 20:27:13.857794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.630 [2024-07-15 20:27:13.857835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.630 [2024-07-15 20:27:13.857856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.630 [2024-07-15 20:27:13.858446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.630 [2024-07-15 20:27:13.858749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.630 [2024-07-15 20:27:13.858760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.630 [2024-07-15 20:27:13.858769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.630 [2024-07-15 20:27:13.863001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.630 [2024-07-15 20:27:13.871994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.630 [2024-07-15 20:27:13.872556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.630 [2024-07-15 20:27:13.872599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.630 [2024-07-15 20:27:13.872626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.630 [2024-07-15 20:27:13.873182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.630 [2024-07-15 20:27:13.873453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.630 [2024-07-15 20:27:13.873465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.630 [2024-07-15 20:27:13.873474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.630 [2024-07-15 20:27:13.877713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.630 [2024-07-15 20:27:13.886707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.630 [2024-07-15 20:27:13.887283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.630 [2024-07-15 20:27:13.887325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.630 [2024-07-15 20:27:13.887346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.630 [2024-07-15 20:27:13.887924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.630 [2024-07-15 20:27:13.888479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.630 [2024-07-15 20:27:13.888491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.630 [2024-07-15 20:27:13.888500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.630 [2024-07-15 20:27:13.892730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.630 [2024-07-15 20:27:13.901475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.630 [2024-07-15 20:27:13.902006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.630 [2024-07-15 20:27:13.902026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.630 [2024-07-15 20:27:13.902036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.630 [2024-07-15 20:27:13.902305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.630 [2024-07-15 20:27:13.902570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.630 [2024-07-15 20:27:13.902581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.630 [2024-07-15 20:27:13.902590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.630 [2024-07-15 20:27:13.906819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.630 [2024-07-15 20:27:13.916063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.630 [2024-07-15 20:27:13.916632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.630 [2024-07-15 20:27:13.916674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.630 [2024-07-15 20:27:13.916695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.630 [2024-07-15 20:27:13.917286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.630 [2024-07-15 20:27:13.917782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.630 [2024-07-15 20:27:13.917797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.630 [2024-07-15 20:27:13.917806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.630 [2024-07-15 20:27:13.922038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.630 [2024-07-15 20:27:13.930787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.630 [2024-07-15 20:27:13.931357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.630 [2024-07-15 20:27:13.931399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.630 [2024-07-15 20:27:13.931421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.631 [2024-07-15 20:27:13.931997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.631 [2024-07-15 20:27:13.932581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.631 [2024-07-15 20:27:13.932593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.631 [2024-07-15 20:27:13.932602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.631 [2024-07-15 20:27:13.936840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.631 [2024-07-15 20:27:13.945362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.631 [2024-07-15 20:27:13.945831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.631 [2024-07-15 20:27:13.945853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.631 [2024-07-15 20:27:13.945863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.631 [2024-07-15 20:27:13.946125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.631 [2024-07-15 20:27:13.946397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.631 [2024-07-15 20:27:13.946409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.631 [2024-07-15 20:27:13.946418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.631 [2024-07-15 20:27:13.950663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.631 [2024-07-15 20:27:13.959926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.631 [2024-07-15 20:27:13.960472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.631 [2024-07-15 20:27:13.960514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.631 [2024-07-15 20:27:13.960536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.631 [2024-07-15 20:27:13.961112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.631 [2024-07-15 20:27:13.961559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.631 [2024-07-15 20:27:13.961571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.631 [2024-07-15 20:27:13.961581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.631 [2024-07-15 20:27:13.966068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.631 [2024-07-15 20:27:13.974591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.631 [2024-07-15 20:27:13.975191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.631 [2024-07-15 20:27:13.975234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.631 [2024-07-15 20:27:13.975271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.631 [2024-07-15 20:27:13.975783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.631 [2024-07-15 20:27:13.976047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.631 [2024-07-15 20:27:13.976058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.631 [2024-07-15 20:27:13.976067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.891 [2024-07-15 20:27:13.980323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.891 [2024-07-15 20:27:13.989332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.891 [2024-07-15 20:27:13.989862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.891 [2024-07-15 20:27:13.989916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.891 [2024-07-15 20:27:13.989937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.891 [2024-07-15 20:27:13.990533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.891 [2024-07-15 20:27:13.990827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.891 [2024-07-15 20:27:13.990838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.891 [2024-07-15 20:27:13.990847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.891 [2024-07-15 20:27:13.995080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.892 [2024-07-15 20:27:14.004076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.892 [2024-07-15 20:27:14.004613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.892 [2024-07-15 20:27:14.004635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.892 [2024-07-15 20:27:14.004645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.892 [2024-07-15 20:27:14.004907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.892 [2024-07-15 20:27:14.005171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.892 [2024-07-15 20:27:14.005182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.892 [2024-07-15 20:27:14.005191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.892 [2024-07-15 20:27:14.009437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.892 [2024-07-15 20:27:14.018692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.892 [2024-07-15 20:27:14.019243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.892 [2024-07-15 20:27:14.019299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.892 [2024-07-15 20:27:14.019335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.892 [2024-07-15 20:27:14.019882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.892 [2024-07-15 20:27:14.020146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.892 [2024-07-15 20:27:14.020157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.892 [2024-07-15 20:27:14.020166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.892 [2024-07-15 20:27:14.024402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.892 [2024-07-15 20:27:14.033400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.892 [2024-07-15 20:27:14.033968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.892 [2024-07-15 20:27:14.034010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.892 [2024-07-15 20:27:14.034031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.892 [2024-07-15 20:27:14.034624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.892 [2024-07-15 20:27:14.034911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.892 [2024-07-15 20:27:14.034922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.892 [2024-07-15 20:27:14.034931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.892 [2024-07-15 20:27:14.039160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.892 [2024-07-15 20:27:14.048153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.892 [2024-07-15 20:27:14.048617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.892 [2024-07-15 20:27:14.048637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.892 [2024-07-15 20:27:14.048647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.892 [2024-07-15 20:27:14.048909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.892 [2024-07-15 20:27:14.049172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.892 [2024-07-15 20:27:14.049183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.892 [2024-07-15 20:27:14.049192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.892 [2024-07-15 20:27:14.053430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.892 [2024-07-15 20:27:14.062925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.892 [2024-07-15 20:27:14.063461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.892 [2024-07-15 20:27:14.063483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.892 [2024-07-15 20:27:14.063493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.892 [2024-07-15 20:27:14.063757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.892 [2024-07-15 20:27:14.064019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.892 [2024-07-15 20:27:14.064034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.892 [2024-07-15 20:27:14.064043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.892 [2024-07-15 20:27:14.068281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.892 [2024-07-15 20:27:14.077540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.892 [2024-07-15 20:27:14.078096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.892 [2024-07-15 20:27:14.078117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.892 [2024-07-15 20:27:14.078127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.892 [2024-07-15 20:27:14.078397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.892 [2024-07-15 20:27:14.078662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.892 [2024-07-15 20:27:14.078673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.892 [2024-07-15 20:27:14.078682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.892 [2024-07-15 20:27:14.082911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.892 [2024-07-15 20:27:14.092157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.892 [2024-07-15 20:27:14.092688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.892 [2024-07-15 20:27:14.092709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.892 [2024-07-15 20:27:14.092719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.892 [2024-07-15 20:27:14.092982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.892 [2024-07-15 20:27:14.093245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.892 [2024-07-15 20:27:14.093264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.892 [2024-07-15 20:27:14.093274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.892 [2024-07-15 20:27:14.097505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.892 [2024-07-15 20:27:14.106749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.892 [2024-07-15 20:27:14.107309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.892 [2024-07-15 20:27:14.107351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.892 [2024-07-15 20:27:14.107372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.892 [2024-07-15 20:27:14.107948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.892 [2024-07-15 20:27:14.108293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.892 [2024-07-15 20:27:14.108308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.892 [2024-07-15 20:27:14.108317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.892 [2024-07-15 20:27:14.112550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.892 [2024-07-15 20:27:14.121293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.892 [2024-07-15 20:27:14.121842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.892 [2024-07-15 20:27:14.121883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.892 [2024-07-15 20:27:14.121904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.892 [2024-07-15 20:27:14.122496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.892 [2024-07-15 20:27:14.123087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.892 [2024-07-15 20:27:14.123098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.892 [2024-07-15 20:27:14.123107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.892 [2024-07-15 20:27:14.127346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.892 [2024-07-15 20:27:14.135848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.892 [2024-07-15 20:27:14.136422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.892 [2024-07-15 20:27:14.136464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.892 [2024-07-15 20:27:14.136485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.892 [2024-07-15 20:27:14.137011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.892 [2024-07-15 20:27:14.137284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.892 [2024-07-15 20:27:14.137307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.892 [2024-07-15 20:27:14.137316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.892 [2024-07-15 20:27:14.141551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.892 [2024-07-15 20:27:14.150547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.892 [2024-07-15 20:27:14.151058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.892 [2024-07-15 20:27:14.151078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.893 [2024-07-15 20:27:14.151088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.893 [2024-07-15 20:27:14.151356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.893 [2024-07-15 20:27:14.151621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.893 [2024-07-15 20:27:14.151632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.893 [2024-07-15 20:27:14.151641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.893 [2024-07-15 20:27:14.155871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.893 [2024-07-15 20:27:14.165112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.893 [2024-07-15 20:27:14.165649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.893 [2024-07-15 20:27:14.165671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.893 [2024-07-15 20:27:14.165680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.893 [2024-07-15 20:27:14.165948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.893 [2024-07-15 20:27:14.166211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.893 [2024-07-15 20:27:14.166222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.893 [2024-07-15 20:27:14.166231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.893 [2024-07-15 20:27:14.170466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.893 [2024-07-15 20:27:14.179726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.893 [2024-07-15 20:27:14.180297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.893 [2024-07-15 20:27:14.180338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.893 [2024-07-15 20:27:14.180359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.893 [2024-07-15 20:27:14.180854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.893 [2024-07-15 20:27:14.181216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.893 [2024-07-15 20:27:14.181231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.893 [2024-07-15 20:27:14.181252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.893 [2024-07-15 20:27:14.187094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.893 [2024-07-15 20:27:14.194565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.893 [2024-07-15 20:27:14.195108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.893 [2024-07-15 20:27:14.195150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.893 [2024-07-15 20:27:14.195171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.893 [2024-07-15 20:27:14.195763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.893 [2024-07-15 20:27:14.196240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.893 [2024-07-15 20:27:14.196251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.893 [2024-07-15 20:27:14.196265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.893 [2024-07-15 20:27:14.200501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.893 [2024-07-15 20:27:14.209253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.893 [2024-07-15 20:27:14.209810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.893 [2024-07-15 20:27:14.209830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.893 [2024-07-15 20:27:14.209840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.893 [2024-07-15 20:27:14.210103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.893 [2024-07-15 20:27:14.210373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.893 [2024-07-15 20:27:14.210385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.893 [2024-07-15 20:27:14.210398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.893 [2024-07-15 20:27:14.214630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.893 [2024-07-15 20:27:14.223882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.893 [2024-07-15 20:27:14.224428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.893 [2024-07-15 20:27:14.224470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.893 [2024-07-15 20:27:14.224491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.893 [2024-07-15 20:27:14.225067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.893 [2024-07-15 20:27:14.225399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.893 [2024-07-15 20:27:14.225411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.893 [2024-07-15 20:27:14.225420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.893 [2024-07-15 20:27:14.229651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.893 [2024-07-15 20:27:14.238657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.893 [2024-07-15 20:27:14.239164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.893 [2024-07-15 20:27:14.239185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:48.893 [2024-07-15 20:27:14.239195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:48.893 [2024-07-15 20:27:14.239466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:48.893 [2024-07-15 20:27:14.239730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.893 [2024-07-15 20:27:14.239741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.893 [2024-07-15 20:27:14.239750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.153 [2024-07-15 20:27:14.243993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.153 [2024-07-15 20:27:14.253262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.153 [2024-07-15 20:27:14.253733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.153 [2024-07-15 20:27:14.253754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.153 [2024-07-15 20:27:14.253764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.153 [2024-07-15 20:27:14.254027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.153 [2024-07-15 20:27:14.254300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.153 [2024-07-15 20:27:14.254312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.153 [2024-07-15 20:27:14.254321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.153 [2024-07-15 20:27:14.258568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.153 [2024-07-15 20:27:14.267839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.153 [2024-07-15 20:27:14.268402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.153 [2024-07-15 20:27:14.268428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.153 [2024-07-15 20:27:14.268438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.153 [2024-07-15 20:27:14.268702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.153 [2024-07-15 20:27:14.268966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.153 [2024-07-15 20:27:14.268977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.153 [2024-07-15 20:27:14.268986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.153 [2024-07-15 20:27:14.273224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.153 [2024-07-15 20:27:14.282509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.153 [2024-07-15 20:27:14.283078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.153 [2024-07-15 20:27:14.283120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.153 [2024-07-15 20:27:14.283140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.153 [2024-07-15 20:27:14.283681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.153 [2024-07-15 20:27:14.283946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.153 [2024-07-15 20:27:14.283957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.153 [2024-07-15 20:27:14.283967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.153 [2024-07-15 20:27:14.288206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.153 [2024-07-15 20:27:14.297214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.153 [2024-07-15 20:27:14.297787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.153 [2024-07-15 20:27:14.297828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.153 [2024-07-15 20:27:14.297850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.153 [2024-07-15 20:27:14.298391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.153 [2024-07-15 20:27:14.298656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.153 [2024-07-15 20:27:14.298667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.153 [2024-07-15 20:27:14.298676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.153 [2024-07-15 20:27:14.302906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.153 [2024-07-15 20:27:14.311905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.153 [2024-07-15 20:27:14.312458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.154 [2024-07-15 20:27:14.312479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.154 [2024-07-15 20:27:14.312489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.154 [2024-07-15 20:27:14.312752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.154 [2024-07-15 20:27:14.313020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.154 [2024-07-15 20:27:14.313031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.154 [2024-07-15 20:27:14.313040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.154 [2024-07-15 20:27:14.317305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.154 [2024-07-15 20:27:14.326567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.154 [2024-07-15 20:27:14.327116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.154 [2024-07-15 20:27:14.327137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.154 [2024-07-15 20:27:14.327148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.154 [2024-07-15 20:27:14.327420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.154 [2024-07-15 20:27:14.327686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.154 [2024-07-15 20:27:14.327697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.154 [2024-07-15 20:27:14.327706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.154 [2024-07-15 20:27:14.331951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.154 [2024-07-15 20:27:14.341218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.154 [2024-07-15 20:27:14.341726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.154 [2024-07-15 20:27:14.341747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.154 [2024-07-15 20:27:14.341757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.154 [2024-07-15 20:27:14.342019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.154 [2024-07-15 20:27:14.342290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.154 [2024-07-15 20:27:14.342302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.154 [2024-07-15 20:27:14.342311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.154 [2024-07-15 20:27:14.346555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.154 [2024-07-15 20:27:14.355824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.154 [2024-07-15 20:27:14.356361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.154 [2024-07-15 20:27:14.356382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.154 [2024-07-15 20:27:14.356392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.154 [2024-07-15 20:27:14.356655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.154 [2024-07-15 20:27:14.356918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.154 [2024-07-15 20:27:14.356929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.154 [2024-07-15 20:27:14.356938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.154 [2024-07-15 20:27:14.361187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.154 [2024-07-15 20:27:14.370466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.154 [2024-07-15 20:27:14.370926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.154 [2024-07-15 20:27:14.370947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.154 [2024-07-15 20:27:14.370957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.154 [2024-07-15 20:27:14.371220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.154 [2024-07-15 20:27:14.371492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.154 [2024-07-15 20:27:14.371504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.154 [2024-07-15 20:27:14.371514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.154 [2024-07-15 20:27:14.375756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.154 [2024-07-15 20:27:14.385036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.154 [2024-07-15 20:27:14.385602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.154 [2024-07-15 20:27:14.385623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.154 [2024-07-15 20:27:14.385633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.154 [2024-07-15 20:27:14.385896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.154 [2024-07-15 20:27:14.386160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.154 [2024-07-15 20:27:14.386171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.154 [2024-07-15 20:27:14.386181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.154 [2024-07-15 20:27:14.390435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.154 [2024-07-15 20:27:14.399715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.154 [2024-07-15 20:27:14.400265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.154 [2024-07-15 20:27:14.400287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.154 [2024-07-15 20:27:14.400297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.154 [2024-07-15 20:27:14.400561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.154 [2024-07-15 20:27:14.400825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.154 [2024-07-15 20:27:14.400836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.154 [2024-07-15 20:27:14.400845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.154 [2024-07-15 20:27:14.405088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.154 [2024-07-15 20:27:14.414356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.154 [2024-07-15 20:27:14.414975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.154 [2024-07-15 20:27:14.415016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.154 [2024-07-15 20:27:14.415045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.154 [2024-07-15 20:27:14.415577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.154 [2024-07-15 20:27:14.415842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.154 [2024-07-15 20:27:14.415853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.154 [2024-07-15 20:27:14.415861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.154 [2024-07-15 20:27:14.420097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.154 [2024-07-15 20:27:14.429101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.154 [2024-07-15 20:27:14.429514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.154 [2024-07-15 20:27:14.429535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.154 [2024-07-15 20:27:14.429544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.154 [2024-07-15 20:27:14.429807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.154 [2024-07-15 20:27:14.430070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.154 [2024-07-15 20:27:14.430081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.154 [2024-07-15 20:27:14.430090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.154 [2024-07-15 20:27:14.434338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.154 [2024-07-15 20:27:14.443855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.154 [2024-07-15 20:27:14.444335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.154 [2024-07-15 20:27:14.444356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.154 [2024-07-15 20:27:14.444366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.154 [2024-07-15 20:27:14.444628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.154 [2024-07-15 20:27:14.444891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.154 [2024-07-15 20:27:14.444903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.154 [2024-07-15 20:27:14.444912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.154 [2024-07-15 20:27:14.449155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.154 [2024-07-15 20:27:14.458422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.154 [2024-07-15 20:27:14.458897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.154 [2024-07-15 20:27:14.458918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.154 [2024-07-15 20:27:14.458928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.154 [2024-07-15 20:27:14.459190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.154 [2024-07-15 20:27:14.459462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.154 [2024-07-15 20:27:14.459477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.154 [2024-07-15 20:27:14.459486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.154 [2024-07-15 20:27:14.463823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.154 [2024-07-15 20:27:14.473093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.154 [2024-07-15 20:27:14.473554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.155 [2024-07-15 20:27:14.473576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.155 [2024-07-15 20:27:14.473586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.155 [2024-07-15 20:27:14.473849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.155 [2024-07-15 20:27:14.474113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.155 [2024-07-15 20:27:14.474123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.155 [2024-07-15 20:27:14.474133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.155 [2024-07-15 20:27:14.478391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.155 [2024-07-15 20:27:14.487655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.155 [2024-07-15 20:27:14.488216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.155 [2024-07-15 20:27:14.488237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.155 [2024-07-15 20:27:14.488247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.155 [2024-07-15 20:27:14.488518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.155 [2024-07-15 20:27:14.488782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.155 [2024-07-15 20:27:14.488793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.155 [2024-07-15 20:27:14.488802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.155 [2024-07-15 20:27:14.493037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.429 [2024-07-15 20:27:14.502302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.429 [2024-07-15 20:27:14.502707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.429 [2024-07-15 20:27:14.502728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.429 [2024-07-15 20:27:14.502738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.429 [2024-07-15 20:27:14.503000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.429 [2024-07-15 20:27:14.503272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.429 [2024-07-15 20:27:14.503284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.429 [2024-07-15 20:27:14.503293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.429 [2024-07-15 20:27:14.507527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.429 [2024-07-15 20:27:14.517045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.429 [2024-07-15 20:27:14.517591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.429 [2024-07-15 20:27:14.517633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.429 [2024-07-15 20:27:14.517654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.429 [2024-07-15 20:27:14.518105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.429 [2024-07-15 20:27:14.518377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.429 [2024-07-15 20:27:14.518389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.429 [2024-07-15 20:27:14.518398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.429 [2024-07-15 20:27:14.522636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.429 [2024-07-15 20:27:14.531648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.429 [2024-07-15 20:27:14.532206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.429 [2024-07-15 20:27:14.532248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.429 [2024-07-15 20:27:14.532280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.429 [2024-07-15 20:27:14.532858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.429 [2024-07-15 20:27:14.533192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.429 [2024-07-15 20:27:14.533203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.429 [2024-07-15 20:27:14.533212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.429 [2024-07-15 20:27:14.537454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.429 [2024-07-15 20:27:14.546211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.429 [2024-07-15 20:27:14.546693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.429 [2024-07-15 20:27:14.546715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.429 [2024-07-15 20:27:14.546724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.429 [2024-07-15 20:27:14.546987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.429 [2024-07-15 20:27:14.547250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.429 [2024-07-15 20:27:14.547269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.429 [2024-07-15 20:27:14.547279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.429 [2024-07-15 20:27:14.551517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.429 [2024-07-15 20:27:14.560781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.429 [2024-07-15 20:27:14.561350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.429 [2024-07-15 20:27:14.561393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.429 [2024-07-15 20:27:14.561415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.429 [2024-07-15 20:27:14.561968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.429 [2024-07-15 20:27:14.562232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.429 [2024-07-15 20:27:14.562243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.429 [2024-07-15 20:27:14.562252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.429 [2024-07-15 20:27:14.566505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.429 [2024-07-15 20:27:14.575517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.429 [2024-07-15 20:27:14.576025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.429 [2024-07-15 20:27:14.576067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.429 [2024-07-15 20:27:14.576088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.429 [2024-07-15 20:27:14.576676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.429 [2024-07-15 20:27:14.577212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.429 [2024-07-15 20:27:14.577224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.429 [2024-07-15 20:27:14.577233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.429 [2024-07-15 20:27:14.581488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.429 [2024-07-15 20:27:14.590243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.429 [2024-07-15 20:27:14.590739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.429 [2024-07-15 20:27:14.590780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.429 [2024-07-15 20:27:14.590801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.429 [2024-07-15 20:27:14.591382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.429 [2024-07-15 20:27:14.591646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.429 [2024-07-15 20:27:14.591658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.429 [2024-07-15 20:27:14.591667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.429 [2024-07-15 20:27:14.595904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.429 [2024-07-15 20:27:14.604917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.429 [2024-07-15 20:27:14.605406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.429 [2024-07-15 20:27:14.605427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.429 [2024-07-15 20:27:14.605437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.429 [2024-07-15 20:27:14.605700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.429 [2024-07-15 20:27:14.605964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.429 [2024-07-15 20:27:14.605974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.429 [2024-07-15 20:27:14.605989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.429 [2024-07-15 20:27:14.610234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.429 [2024-07-15 20:27:14.619493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.429 [2024-07-15 20:27:14.620054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.429 [2024-07-15 20:27:14.620075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.429 [2024-07-15 20:27:14.620085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.429 [2024-07-15 20:27:14.620355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.429 [2024-07-15 20:27:14.620618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.429 [2024-07-15 20:27:14.620629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.429 [2024-07-15 20:27:14.620638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.429 [2024-07-15 20:27:14.624877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.429 [2024-07-15 20:27:14.634135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.429 [2024-07-15 20:27:14.634595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.429 [2024-07-15 20:27:14.634617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.429 [2024-07-15 20:27:14.634626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.429 [2024-07-15 20:27:14.634888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.429 [2024-07-15 20:27:14.635152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.430 [2024-07-15 20:27:14.635163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.430 [2024-07-15 20:27:14.635172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.430 [2024-07-15 20:27:14.639422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.430 [2024-07-15 20:27:14.648672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.430 [2024-07-15 20:27:14.649212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.430 [2024-07-15 20:27:14.649264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.430 [2024-07-15 20:27:14.649286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.430 [2024-07-15 20:27:14.649865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.430 [2024-07-15 20:27:14.650445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.430 [2024-07-15 20:27:14.650457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.430 [2024-07-15 20:27:14.650466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.430 [2024-07-15 20:27:14.654704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.430 [2024-07-15 20:27:14.663214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.430 [2024-07-15 20:27:14.663762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.430 [2024-07-15 20:27:14.663783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.430 [2024-07-15 20:27:14.663793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.430 [2024-07-15 20:27:14.664055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.430 [2024-07-15 20:27:14.664327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.430 [2024-07-15 20:27:14.664338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.430 [2024-07-15 20:27:14.664348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.430 [2024-07-15 20:27:14.668584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.430 [2024-07-15 20:27:14.677854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.430 [2024-07-15 20:27:14.678408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.430 [2024-07-15 20:27:14.678430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.430 [2024-07-15 20:27:14.678440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.430 [2024-07-15 20:27:14.678703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.430 [2024-07-15 20:27:14.678966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.430 [2024-07-15 20:27:14.678978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.430 [2024-07-15 20:27:14.678987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.430 [2024-07-15 20:27:14.683225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.430 [2024-07-15 20:27:14.692490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.430 [2024-07-15 20:27:14.693113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.430 [2024-07-15 20:27:14.693155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.430 [2024-07-15 20:27:14.693176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.430 [2024-07-15 20:27:14.693682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.430 [2024-07-15 20:27:14.693946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.430 [2024-07-15 20:27:14.693959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.430 [2024-07-15 20:27:14.693968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.430 [2024-07-15 20:27:14.698204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.430 [2024-07-15 20:27:14.707217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.430 [2024-07-15 20:27:14.707803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.430 [2024-07-15 20:27:14.707824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.430 [2024-07-15 20:27:14.707834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.430 [2024-07-15 20:27:14.708100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.430 [2024-07-15 20:27:14.708370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.430 [2024-07-15 20:27:14.708381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.430 [2024-07-15 20:27:14.708391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.430 [2024-07-15 20:27:14.712621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.430 [2024-07-15 20:27:14.721887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.430 [2024-07-15 20:27:14.722416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.430 [2024-07-15 20:27:14.722437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.430 [2024-07-15 20:27:14.722447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.430 [2024-07-15 20:27:14.722709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.430 [2024-07-15 20:27:14.722973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.430 [2024-07-15 20:27:14.722984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.430 [2024-07-15 20:27:14.722992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.430 [2024-07-15 20:27:14.727233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.430 [2024-07-15 20:27:14.736495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.430 [2024-07-15 20:27:14.737026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.430 [2024-07-15 20:27:14.737078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.430 [2024-07-15 20:27:14.737099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.430 [2024-07-15 20:27:14.737666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.430 [2024-07-15 20:27:14.737931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.430 [2024-07-15 20:27:14.737942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.430 [2024-07-15 20:27:14.737951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.430 [2024-07-15 20:27:14.742192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.430 [2024-07-15 20:27:14.751211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.430 [2024-07-15 20:27:14.751724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.430 [2024-07-15 20:27:14.751745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.430 [2024-07-15 20:27:14.751755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.430 [2024-07-15 20:27:14.752017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.430 [2024-07-15 20:27:14.752288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.430 [2024-07-15 20:27:14.752300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.430 [2024-07-15 20:27:14.752317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.430 [2024-07-15 20:27:14.756761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.749 [2024-07-15 20:27:14.765775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.749 [2024-07-15 20:27:14.766361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.749 [2024-07-15 20:27:14.766383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.749 [2024-07-15 20:27:14.766393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.749 [2024-07-15 20:27:14.766655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.749 [2024-07-15 20:27:14.766918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.749 [2024-07-15 20:27:14.766929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.749 [2024-07-15 20:27:14.766938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.749 [2024-07-15 20:27:14.771177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.749 [2024-07-15 20:27:14.780456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.749 [2024-07-15 20:27:14.780918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.749 [2024-07-15 20:27:14.780939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.749 [2024-07-15 20:27:14.780949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.749 [2024-07-15 20:27:14.781212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.749 [2024-07-15 20:27:14.781486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.749 [2024-07-15 20:27:14.781498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.749 [2024-07-15 20:27:14.781507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.749 [2024-07-15 20:27:14.785745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.749 [2024-07-15 20:27:14.794993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.749 [2024-07-15 20:27:14.795558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.749 [2024-07-15 20:27:14.795579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.749 [2024-07-15 20:27:14.795589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.749 [2024-07-15 20:27:14.795852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.749 [2024-07-15 20:27:14.796116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.749 [2024-07-15 20:27:14.796127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.749 [2024-07-15 20:27:14.796136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.749 [2024-07-15 20:27:14.800378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.749 [2024-07-15 20:27:14.809630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.749 [2024-07-15 20:27:14.810186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.749 [2024-07-15 20:27:14.810211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.749 [2024-07-15 20:27:14.810221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.749 [2024-07-15 20:27:14.810491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.749 [2024-07-15 20:27:14.810755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.749 [2024-07-15 20:27:14.810766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.749 [2024-07-15 20:27:14.810775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.749 [2024-07-15 20:27:14.815007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.749 [2024-07-15 20:27:14.824271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.750 [2024-07-15 20:27:14.824806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.750 [2024-07-15 20:27:14.824826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.750 [2024-07-15 20:27:14.824836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.750 [2024-07-15 20:27:14.825098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.750 [2024-07-15 20:27:14.825369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.750 [2024-07-15 20:27:14.825381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.750 [2024-07-15 20:27:14.825390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.750 [2024-07-15 20:27:14.829628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.750 [2024-07-15 20:27:14.838881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.750 [2024-07-15 20:27:14.839437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.750 [2024-07-15 20:27:14.839459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.750 [2024-07-15 20:27:14.839468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.750 [2024-07-15 20:27:14.839731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.750 [2024-07-15 20:27:14.839994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.750 [2024-07-15 20:27:14.840005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.750 [2024-07-15 20:27:14.840014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.750 [2024-07-15 20:27:14.844251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.750 [2024-07-15 20:27:14.853507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.750 [2024-07-15 20:27:14.854021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.750 [2024-07-15 20:27:14.854061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.750 [2024-07-15 20:27:14.854083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.750 [2024-07-15 20:27:14.854686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.750 [2024-07-15 20:27:14.854953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.750 [2024-07-15 20:27:14.854964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.750 [2024-07-15 20:27:14.854973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.750 [2024-07-15 20:27:14.859206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.750 [2024-07-15 20:27:14.868204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.750 [2024-07-15 20:27:14.868760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.750 [2024-07-15 20:27:14.868802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.750 [2024-07-15 20:27:14.868822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.750 [2024-07-15 20:27:14.869413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.750 [2024-07-15 20:27:14.869922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.750 [2024-07-15 20:27:14.869934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.750 [2024-07-15 20:27:14.869943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.750 [2024-07-15 20:27:14.874174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.750 [2024-07-15 20:27:14.882955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.750 [2024-07-15 20:27:14.883519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.750 [2024-07-15 20:27:14.883541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.750 [2024-07-15 20:27:14.883551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.750 [2024-07-15 20:27:14.883814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.750 [2024-07-15 20:27:14.884078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.750 [2024-07-15 20:27:14.884088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.750 [2024-07-15 20:27:14.884098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.750 [2024-07-15 20:27:14.888334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.750 [2024-07-15 20:27:14.897574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.750 [2024-07-15 20:27:14.898036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.750 [2024-07-15 20:27:14.898056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.750 [2024-07-15 20:27:14.898065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.750 [2024-07-15 20:27:14.898336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.750 [2024-07-15 20:27:14.898601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.750 [2024-07-15 20:27:14.898612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.750 [2024-07-15 20:27:14.898621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.750 [2024-07-15 20:27:14.902857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.750 [2024-07-15 20:27:14.912100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.750 [2024-07-15 20:27:14.912633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.750 [2024-07-15 20:27:14.912682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.750 [2024-07-15 20:27:14.912704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.750 [2024-07-15 20:27:14.913294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.750 [2024-07-15 20:27:14.913874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.750 [2024-07-15 20:27:14.913898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.750 [2024-07-15 20:27:14.913918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.750 [2024-07-15 20:27:14.918173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.750 [2024-07-15 20:27:14.926668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.750 [2024-07-15 20:27:14.927124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.750 [2024-07-15 20:27:14.927144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.750 [2024-07-15 20:27:14.927154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.750 [2024-07-15 20:27:14.927425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.750 [2024-07-15 20:27:14.927689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.750 [2024-07-15 20:27:14.927700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.750 [2024-07-15 20:27:14.927709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.750 [2024-07-15 20:27:14.931940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.750 [2024-07-15 20:27:14.941434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.750 [2024-07-15 20:27:14.941968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.750 [2024-07-15 20:27:14.941989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.750 [2024-07-15 20:27:14.941998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.750 [2024-07-15 20:27:14.942269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.750 [2024-07-15 20:27:14.942534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.750 [2024-07-15 20:27:14.942545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.750 [2024-07-15 20:27:14.942554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.750 [2024-07-15 20:27:14.946794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.750 [2024-07-15 20:27:14.956054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.750 [2024-07-15 20:27:14.956543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.750 [2024-07-15 20:27:14.956586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.750 [2024-07-15 20:27:14.956614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.750 [2024-07-15 20:27:14.957163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.750 [2024-07-15 20:27:14.957436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.750 [2024-07-15 20:27:14.957448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.750 [2024-07-15 20:27:14.957457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.750 [2024-07-15 20:27:14.961702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.750 [2024-07-15 20:27:14.970714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.750 [2024-07-15 20:27:14.971281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.750 [2024-07-15 20:27:14.971304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.750 [2024-07-15 20:27:14.971314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.750 [2024-07-15 20:27:14.971578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.750 [2024-07-15 20:27:14.971842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.751 [2024-07-15 20:27:14.971853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.751 [2024-07-15 20:27:14.971862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.751 [2024-07-15 20:27:14.976111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.751 [2024-07-15 20:27:14.985403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.751 [2024-07-15 20:27:14.985852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.751 [2024-07-15 20:27:14.985874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.751 [2024-07-15 20:27:14.985884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.751 [2024-07-15 20:27:14.986147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.751 [2024-07-15 20:27:14.986419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.751 [2024-07-15 20:27:14.986431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.751 [2024-07-15 20:27:14.986441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.751 [2024-07-15 20:27:14.990677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.751 [2024-07-15 20:27:14.999938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.751 [2024-07-15 20:27:15.000554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.751 [2024-07-15 20:27:15.000576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.751 [2024-07-15 20:27:15.000586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.751 [2024-07-15 20:27:15.000850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.751 [2024-07-15 20:27:15.001113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.751 [2024-07-15 20:27:15.001128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.751 [2024-07-15 20:27:15.001138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.751 [2024-07-15 20:27:15.005379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.751 [2024-07-15 20:27:15.014642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.751 [2024-07-15 20:27:15.015174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.751 [2024-07-15 20:27:15.015224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.751 [2024-07-15 20:27:15.015245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.751 [2024-07-15 20:27:15.015837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.751 [2024-07-15 20:27:15.016206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.751 [2024-07-15 20:27:15.016218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.751 [2024-07-15 20:27:15.016227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.751 [2024-07-15 20:27:15.020470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.751 [2024-07-15 20:27:15.029216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.751 [2024-07-15 20:27:15.029785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.751 [2024-07-15 20:27:15.029826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.751 [2024-07-15 20:27:15.029848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.751 [2024-07-15 20:27:15.030351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.751 [2024-07-15 20:27:15.030615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.751 [2024-07-15 20:27:15.030626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.751 [2024-07-15 20:27:15.030635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.751 [2024-07-15 20:27:15.034870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.751 [2024-07-15 20:27:15.043869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.751 [2024-07-15 20:27:15.044432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.751 [2024-07-15 20:27:15.044473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.751 [2024-07-15 20:27:15.044495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.751 [2024-07-15 20:27:15.045071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.751 [2024-07-15 20:27:15.045508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.751 [2024-07-15 20:27:15.045520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.751 [2024-07-15 20:27:15.045530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.751 [2024-07-15 20:27:15.049765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.751 [2024-07-15 20:27:15.058521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.751 [2024-07-15 20:27:15.059079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.751 [2024-07-15 20:27:15.059122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:49.751 [2024-07-15 20:27:15.059144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:49.751 [2024-07-15 20:27:15.059734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:49.751 [2024-07-15 20:27:15.060323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.751 [2024-07-15 20:27:15.060348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.751 [2024-07-15 20:27:15.060367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.751 [2024-07-15 20:27:15.064670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.044 [2024-07-15 20:27:15.073180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.044 [2024-07-15 20:27:15.073681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.044 [2024-07-15 20:27:15.073702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.044 [2024-07-15 20:27:15.073712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.044 [2024-07-15 20:27:15.073975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.044 [2024-07-15 20:27:15.074239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.044 [2024-07-15 20:27:15.074250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.044 [2024-07-15 20:27:15.074266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.044 [2024-07-15 20:27:15.078514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.044 [2024-07-15 20:27:15.087769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.044 [2024-07-15 20:27:15.088331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.044 [2024-07-15 20:27:15.088373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.044 [2024-07-15 20:27:15.088393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.044 [2024-07-15 20:27:15.088972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.044 [2024-07-15 20:27:15.089321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.044 [2024-07-15 20:27:15.089333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.044 [2024-07-15 20:27:15.089342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.044 [2024-07-15 20:27:15.093576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.044 [2024-07-15 20:27:15.102321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.044 [2024-07-15 20:27:15.102791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.044 [2024-07-15 20:27:15.102812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.044 [2024-07-15 20:27:15.102822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.044 [2024-07-15 20:27:15.103089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.044 [2024-07-15 20:27:15.103362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.044 [2024-07-15 20:27:15.103374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.044 [2024-07-15 20:27:15.103383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.044 [2024-07-15 20:27:15.107614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.044 [2024-07-15 20:27:15.116872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.044 [2024-07-15 20:27:15.117404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.044 [2024-07-15 20:27:15.117425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.044 [2024-07-15 20:27:15.117435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.044 [2024-07-15 20:27:15.117698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.044 [2024-07-15 20:27:15.117962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.044 [2024-07-15 20:27:15.117973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.044 [2024-07-15 20:27:15.117982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.044 [2024-07-15 20:27:15.122220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.044 [2024-07-15 20:27:15.131473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.044 [2024-07-15 20:27:15.132029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.044 [2024-07-15 20:27:15.132050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.044 [2024-07-15 20:27:15.132060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.044 [2024-07-15 20:27:15.132329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.044 [2024-07-15 20:27:15.132594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.044 [2024-07-15 20:27:15.132604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.044 [2024-07-15 20:27:15.132614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.044 [2024-07-15 20:27:15.136848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.044 [2024-07-15 20:27:15.146087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.044 [2024-07-15 20:27:15.146653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.044 [2024-07-15 20:27:15.146694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.044 [2024-07-15 20:27:15.146715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.044 [2024-07-15 20:27:15.147306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.044 [2024-07-15 20:27:15.147748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.044 [2024-07-15 20:27:15.147760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.044 [2024-07-15 20:27:15.147773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.044 [2024-07-15 20:27:15.152003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.044 [2024-07-15 20:27:15.160745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.044 [2024-07-15 20:27:15.161303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.044 [2024-07-15 20:27:15.161323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.044 [2024-07-15 20:27:15.161333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.044 [2024-07-15 20:27:15.161597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.044 [2024-07-15 20:27:15.161860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.044 [2024-07-15 20:27:15.161871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.044 [2024-07-15 20:27:15.161881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.044 [2024-07-15 20:27:15.166115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.044 [2024-07-15 20:27:15.175366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.044 [2024-07-15 20:27:15.175915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.044 [2024-07-15 20:27:15.175936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.044 [2024-07-15 20:27:15.175946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.044 [2024-07-15 20:27:15.176209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.044 [2024-07-15 20:27:15.176479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.044 [2024-07-15 20:27:15.176491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.044 [2024-07-15 20:27:15.176500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.044 [2024-07-15 20:27:15.180748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.044 [2024-07-15 20:27:15.189997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.044 [2024-07-15 20:27:15.190558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.044 [2024-07-15 20:27:15.190599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.044 [2024-07-15 20:27:15.190622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.044 [2024-07-15 20:27:15.191197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.044 [2024-07-15 20:27:15.191779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.045 [2024-07-15 20:27:15.191791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.045 [2024-07-15 20:27:15.191800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.045 [2024-07-15 20:27:15.196027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.045 [2024-07-15 20:27:15.204528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.045 [2024-07-15 20:27:15.205088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.045 [2024-07-15 20:27:15.205109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.045 [2024-07-15 20:27:15.205118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.045 [2024-07-15 20:27:15.205388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.045 [2024-07-15 20:27:15.205653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.045 [2024-07-15 20:27:15.205664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.045 [2024-07-15 20:27:15.205673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.045 [2024-07-15 20:27:15.209904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.045 [2024-07-15 20:27:15.219156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.045 [2024-07-15 20:27:15.219731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.045 [2024-07-15 20:27:15.219773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.045 [2024-07-15 20:27:15.219794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.045 [2024-07-15 20:27:15.220385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.045 [2024-07-15 20:27:15.220732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.045 [2024-07-15 20:27:15.220743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.045 [2024-07-15 20:27:15.220752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.045 [2024-07-15 20:27:15.224981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.045 [2024-07-15 20:27:15.233733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.045 [2024-07-15 20:27:15.234294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.045 [2024-07-15 20:27:15.234336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.045 [2024-07-15 20:27:15.234358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.045 [2024-07-15 20:27:15.234891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.045 [2024-07-15 20:27:15.235155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.045 [2024-07-15 20:27:15.235165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.045 [2024-07-15 20:27:15.235175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.045 [2024-07-15 20:27:15.239415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.045 [2024-07-15 20:27:15.248412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.045 [2024-07-15 20:27:15.248981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.045 [2024-07-15 20:27:15.249023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.045 [2024-07-15 20:27:15.249044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.045 [2024-07-15 20:27:15.249635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.045 [2024-07-15 20:27:15.249930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.045 [2024-07-15 20:27:15.249942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.045 [2024-07-15 20:27:15.249951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.045 [2024-07-15 20:27:15.254182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.045 [2024-07-15 20:27:15.263178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.045 [2024-07-15 20:27:15.263736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.045 [2024-07-15 20:27:15.263758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.045 [2024-07-15 20:27:15.263768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.045 [2024-07-15 20:27:15.264030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.045 [2024-07-15 20:27:15.264300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.045 [2024-07-15 20:27:15.264312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.045 [2024-07-15 20:27:15.264321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.045 [2024-07-15 20:27:15.268551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.045 [2024-07-15 20:27:15.277802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.045 [2024-07-15 20:27:15.278354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.045 [2024-07-15 20:27:15.278374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.045 [2024-07-15 20:27:15.278384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.045 [2024-07-15 20:27:15.278646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.045 [2024-07-15 20:27:15.278909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.045 [2024-07-15 20:27:15.278920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.045 [2024-07-15 20:27:15.278929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.045 [2024-07-15 20:27:15.283169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.045 [2024-07-15 20:27:15.292419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.045 [2024-07-15 20:27:15.292986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.045 [2024-07-15 20:27:15.293027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.045 [2024-07-15 20:27:15.293048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.045 [2024-07-15 20:27:15.293639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.045 [2024-07-15 20:27:15.293933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.045 [2024-07-15 20:27:15.293944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.045 [2024-07-15 20:27:15.293953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.045 [2024-07-15 20:27:15.298195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.045 [2024-07-15 20:27:15.307183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.045 [2024-07-15 20:27:15.307744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.045 [2024-07-15 20:27:15.307765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.045 [2024-07-15 20:27:15.307775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.045 [2024-07-15 20:27:15.308037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.045 [2024-07-15 20:27:15.308307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.045 [2024-07-15 20:27:15.308319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.045 [2024-07-15 20:27:15.308328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.045 [2024-07-15 20:27:15.312562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.045 [2024-07-15 20:27:15.321811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.045 [2024-07-15 20:27:15.322364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.045 [2024-07-15 20:27:15.322385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.045 [2024-07-15 20:27:15.322395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.045 [2024-07-15 20:27:15.322658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.045 [2024-07-15 20:27:15.322921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.045 [2024-07-15 20:27:15.322932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.045 [2024-07-15 20:27:15.322941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.045 [2024-07-15 20:27:15.327177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.045 [2024-07-15 20:27:15.336439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.045 [2024-07-15 20:27:15.337010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.045 [2024-07-15 20:27:15.337051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.045 [2024-07-15 20:27:15.337072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.045 [2024-07-15 20:27:15.337668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.045 [2024-07-15 20:27:15.337933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.045 [2024-07-15 20:27:15.337944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.045 [2024-07-15 20:27:15.337953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.045 [2024-07-15 20:27:15.342187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.045 [2024-07-15 20:27:15.351185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.045 [2024-07-15 20:27:15.351746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.045 [2024-07-15 20:27:15.351770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.046 [2024-07-15 20:27:15.351780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.046 [2024-07-15 20:27:15.352044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.046 [2024-07-15 20:27:15.352312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.046 [2024-07-15 20:27:15.352323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.046 [2024-07-15 20:27:15.352333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.046 [2024-07-15 20:27:15.356567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.046 [2024-07-15 20:27:15.365823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.046 [2024-07-15 20:27:15.366378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.046 [2024-07-15 20:27:15.366399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.046 [2024-07-15 20:27:15.366410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.046 [2024-07-15 20:27:15.366672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.046 [2024-07-15 20:27:15.366935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.046 [2024-07-15 20:27:15.366946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.046 [2024-07-15 20:27:15.366955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.046 [2024-07-15 20:27:15.371189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.046 [2024-07-15 20:27:15.380453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.046 [2024-07-15 20:27:15.381013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.046 [2024-07-15 20:27:15.381054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.046 [2024-07-15 20:27:15.381075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.046 [2024-07-15 20:27:15.381664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.046 [2024-07-15 20:27:15.382232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.046 [2024-07-15 20:27:15.382243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.046 [2024-07-15 20:27:15.382252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.046 [2024-07-15 20:27:15.386495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.306 [2024-07-15 20:27:15.395012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.306 [2024-07-15 20:27:15.395569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.306 [2024-07-15 20:27:15.395590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.306 [2024-07-15 20:27:15.395600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.306 [2024-07-15 20:27:15.395864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.306 [2024-07-15 20:27:15.396132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.306 [2024-07-15 20:27:15.396143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.306 [2024-07-15 20:27:15.396152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.306 [2024-07-15 20:27:15.400398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.306 [2024-07-15 20:27:15.409643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.306 [2024-07-15 20:27:15.410207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.306 [2024-07-15 20:27:15.410247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.306 [2024-07-15 20:27:15.410283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.306 [2024-07-15 20:27:15.410860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.306 [2024-07-15 20:27:15.411315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.306 [2024-07-15 20:27:15.411327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.306 [2024-07-15 20:27:15.411336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.306 [2024-07-15 20:27:15.415569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.306 [2024-07-15 20:27:15.424322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.306 [2024-07-15 20:27:15.424811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.306 [2024-07-15 20:27:15.424832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.306 [2024-07-15 20:27:15.424842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.306 [2024-07-15 20:27:15.425105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.306 [2024-07-15 20:27:15.425374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.306 [2024-07-15 20:27:15.425386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.306 [2024-07-15 20:27:15.425395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.306 [2024-07-15 20:27:15.429631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.306 [2024-07-15 20:27:15.438877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.306 [2024-07-15 20:27:15.439446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.306 [2024-07-15 20:27:15.439488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.306 [2024-07-15 20:27:15.439510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.306 [2024-07-15 20:27:15.440088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.306 [2024-07-15 20:27:15.440386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.306 [2024-07-15 20:27:15.440397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.306 [2024-07-15 20:27:15.440407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.306 [2024-07-15 20:27:15.444637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.306 [2024-07-15 20:27:15.453637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.306 [2024-07-15 20:27:15.454205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.306 [2024-07-15 20:27:15.454246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.306 [2024-07-15 20:27:15.454281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.306 [2024-07-15 20:27:15.454861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.306 [2024-07-15 20:27:15.455407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.306 [2024-07-15 20:27:15.455419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.306 [2024-07-15 20:27:15.455428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.306 [2024-07-15 20:27:15.459657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.306 [2024-07-15 20:27:15.468393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.306 [2024-07-15 20:27:15.468954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.306 [2024-07-15 20:27:15.468995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.306 [2024-07-15 20:27:15.469016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.306 [2024-07-15 20:27:15.469463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.306 [2024-07-15 20:27:15.469727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.306 [2024-07-15 20:27:15.469738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.306 [2024-07-15 20:27:15.469747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.306 [2024-07-15 20:27:15.473978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.306 [2024-07-15 20:27:15.482986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.306 [2024-07-15 20:27:15.483548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.306 [2024-07-15 20:27:15.483569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.306 [2024-07-15 20:27:15.483579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.306 [2024-07-15 20:27:15.483841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.306 [2024-07-15 20:27:15.484104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.306 [2024-07-15 20:27:15.484115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.306 [2024-07-15 20:27:15.484124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.306 [2024-07-15 20:27:15.488447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.306 [2024-07-15 20:27:15.497702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.306 [2024-07-15 20:27:15.498270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.306 [2024-07-15 20:27:15.498314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.306 [2024-07-15 20:27:15.498348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.306 [2024-07-15 20:27:15.498926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.306 [2024-07-15 20:27:15.499318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.306 [2024-07-15 20:27:15.499335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.306 [2024-07-15 20:27:15.499348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.306 [2024-07-15 20:27:15.505573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.306 [2024-07-15 20:27:15.512694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.306 [2024-07-15 20:27:15.513269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.306 [2024-07-15 20:27:15.513310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.306 [2024-07-15 20:27:15.513331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.306 [2024-07-15 20:27:15.513909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.307 [2024-07-15 20:27:15.514440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.307 [2024-07-15 20:27:15.514452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.307 [2024-07-15 20:27:15.514461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.307 [2024-07-15 20:27:15.518690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.307 [2024-07-15 20:27:15.527437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.307 [2024-07-15 20:27:15.527986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.307 [2024-07-15 20:27:15.528028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.307 [2024-07-15 20:27:15.528049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.307 [2024-07-15 20:27:15.528561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.307 [2024-07-15 20:27:15.528825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.307 [2024-07-15 20:27:15.528836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.307 [2024-07-15 20:27:15.528845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.307 [2024-07-15 20:27:15.533073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.307 [2024-07-15 20:27:15.542069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.307 [2024-07-15 20:27:15.542547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.307 [2024-07-15 20:27:15.542568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.307 [2024-07-15 20:27:15.542577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.307 [2024-07-15 20:27:15.542840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.307 [2024-07-15 20:27:15.543103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.307 [2024-07-15 20:27:15.543117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.307 [2024-07-15 20:27:15.543127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.307 [2024-07-15 20:27:15.547365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.307 [2024-07-15 20:27:15.556610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.307 [2024-07-15 20:27:15.557092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.307 [2024-07-15 20:27:15.557133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.307 [2024-07-15 20:27:15.557154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.307 [2024-07-15 20:27:15.557719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.307 [2024-07-15 20:27:15.557983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.307 [2024-07-15 20:27:15.557994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.307 [2024-07-15 20:27:15.558003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.307 [2024-07-15 20:27:15.562234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.307 [2024-07-15 20:27:15.571233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.307 [2024-07-15 20:27:15.571789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.307 [2024-07-15 20:27:15.571810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.307 [2024-07-15 20:27:15.571820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.307 [2024-07-15 20:27:15.572082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.307 [2024-07-15 20:27:15.572352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.307 [2024-07-15 20:27:15.572363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.307 [2024-07-15 20:27:15.572373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.307 [2024-07-15 20:27:15.576604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.307 [2024-07-15 20:27:15.585857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.307 [2024-07-15 20:27:15.586422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.307 [2024-07-15 20:27:15.586464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.307 [2024-07-15 20:27:15.586485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.307 [2024-07-15 20:27:15.586922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.307 [2024-07-15 20:27:15.587186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.307 [2024-07-15 20:27:15.587197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.307 [2024-07-15 20:27:15.587206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.307 [2024-07-15 20:27:15.591447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.307 [2024-07-15 20:27:15.600452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.307 [2024-07-15 20:27:15.601013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.307 [2024-07-15 20:27:15.601033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.307 [2024-07-15 20:27:15.601043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.307 [2024-07-15 20:27:15.601312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.307 [2024-07-15 20:27:15.601577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.307 [2024-07-15 20:27:15.601588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.307 [2024-07-15 20:27:15.601597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.307 [2024-07-15 20:27:15.605827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.307 [2024-07-15 20:27:15.615072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.307 [2024-07-15 20:27:15.615633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.307 [2024-07-15 20:27:15.615654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.307 [2024-07-15 20:27:15.615664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.307 [2024-07-15 20:27:15.615926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.307 [2024-07-15 20:27:15.616190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.307 [2024-07-15 20:27:15.616200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.307 [2024-07-15 20:27:15.616210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.307 [2024-07-15 20:27:15.620447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.307 [2024-07-15 20:27:15.629692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.307 [2024-07-15 20:27:15.630270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.307 [2024-07-15 20:27:15.630312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.307 [2024-07-15 20:27:15.630333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.307 [2024-07-15 20:27:15.630886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.307 [2024-07-15 20:27:15.631149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.307 [2024-07-15 20:27:15.631160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.307 [2024-07-15 20:27:15.631169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.307 [2024-07-15 20:27:15.635403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.307 [2024-07-15 20:27:15.644390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.307 [2024-07-15 20:27:15.644945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.307 [2024-07-15 20:27:15.644985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.307 [2024-07-15 20:27:15.645006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.307 [2024-07-15 20:27:15.645598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.307 [2024-07-15 20:27:15.645863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.307 [2024-07-15 20:27:15.645874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.307 [2024-07-15 20:27:15.645883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.307 [2024-07-15 20:27:15.650120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.568 [2024-07-15 20:27:15.659122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.568 [2024-07-15 20:27:15.659697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.568 [2024-07-15 20:27:15.659718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.568 [2024-07-15 20:27:15.659728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.568 [2024-07-15 20:27:15.659990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.568 [2024-07-15 20:27:15.660260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.568 [2024-07-15 20:27:15.660272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.568 [2024-07-15 20:27:15.660281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.568 [2024-07-15 20:27:15.664521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.568 [2024-07-15 20:27:15.673765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.568 [2024-07-15 20:27:15.674320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.568 [2024-07-15 20:27:15.674341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.568 [2024-07-15 20:27:15.674351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.568 [2024-07-15 20:27:15.674614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.568 [2024-07-15 20:27:15.674877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.568 [2024-07-15 20:27:15.674888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.568 [2024-07-15 20:27:15.674897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.568 [2024-07-15 20:27:15.679144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.568 [2024-07-15 20:27:15.688391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.568 [2024-07-15 20:27:15.688943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.568 [2024-07-15 20:27:15.688964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.568 [2024-07-15 20:27:15.688974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.568 [2024-07-15 20:27:15.689236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.568 [2024-07-15 20:27:15.689506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.568 [2024-07-15 20:27:15.689518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.568 [2024-07-15 20:27:15.689531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.568 [2024-07-15 20:27:15.693762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.568 [2024-07-15 20:27:15.703008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.568 [2024-07-15 20:27:15.703566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.568 [2024-07-15 20:27:15.703586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.568 [2024-07-15 20:27:15.703596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.568 [2024-07-15 20:27:15.703859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.569 [2024-07-15 20:27:15.704122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.569 [2024-07-15 20:27:15.704133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.569 [2024-07-15 20:27:15.704142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.569 [2024-07-15 20:27:15.708381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.569 [2024-07-15 20:27:15.717683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.569 [2024-07-15 20:27:15.718241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.569 [2024-07-15 20:27:15.718268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.569 [2024-07-15 20:27:15.718278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.569 [2024-07-15 20:27:15.718550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.569 [2024-07-15 20:27:15.718814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.569 [2024-07-15 20:27:15.718824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.569 [2024-07-15 20:27:15.718834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.569 [2024-07-15 20:27:15.723069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.569 [2024-07-15 20:27:15.732316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.569 [2024-07-15 20:27:15.732877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.569 [2024-07-15 20:27:15.732897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.569 [2024-07-15 20:27:15.732908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.569 [2024-07-15 20:27:15.733170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.569 [2024-07-15 20:27:15.733441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.569 [2024-07-15 20:27:15.733453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.569 [2024-07-15 20:27:15.733462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.569 [2024-07-15 20:27:15.737699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.569 [2024-07-15 20:27:15.746950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.569 [2024-07-15 20:27:15.747431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.569 [2024-07-15 20:27:15.747452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.569 [2024-07-15 20:27:15.747462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.569 [2024-07-15 20:27:15.747725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.569 [2024-07-15 20:27:15.747989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.569 [2024-07-15 20:27:15.748000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.569 [2024-07-15 20:27:15.748009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.569 [2024-07-15 20:27:15.752245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.569 [2024-07-15 20:27:15.761496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.569 [2024-07-15 20:27:15.762052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.569 [2024-07-15 20:27:15.762073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.569 [2024-07-15 20:27:15.762083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.569 [2024-07-15 20:27:15.762354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.569 [2024-07-15 20:27:15.762619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.569 [2024-07-15 20:27:15.762629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.569 [2024-07-15 20:27:15.762639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.569 [2024-07-15 20:27:15.766866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.569 [2024-07-15 20:27:15.776112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.569 [2024-07-15 20:27:15.776643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.569 [2024-07-15 20:27:15.776664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.569 [2024-07-15 20:27:15.776674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.569 [2024-07-15 20:27:15.776936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.569 [2024-07-15 20:27:15.777199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.569 [2024-07-15 20:27:15.777210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.569 [2024-07-15 20:27:15.777220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.569 [2024-07-15 20:27:15.781466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.569 [2024-07-15 20:27:15.790709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.569 [2024-07-15 20:27:15.791196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.569 [2024-07-15 20:27:15.791237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.569 [2024-07-15 20:27:15.791271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.569 [2024-07-15 20:27:15.791849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.569 [2024-07-15 20:27:15.792298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.569 [2024-07-15 20:27:15.792316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.569 [2024-07-15 20:27:15.792329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.569 [2024-07-15 20:27:15.798544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.569 [2024-07-15 20:27:15.805750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.569 [2024-07-15 20:27:15.806307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.569 [2024-07-15 20:27:15.806348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.569 [2024-07-15 20:27:15.806370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.569 [2024-07-15 20:27:15.806945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.569 [2024-07-15 20:27:15.807307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.569 [2024-07-15 20:27:15.807319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.569 [2024-07-15 20:27:15.807328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.569 [2024-07-15 20:27:15.811559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.569 [2024-07-15 20:27:15.820299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.569 [2024-07-15 20:27:15.820785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.569 [2024-07-15 20:27:15.820826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.569 [2024-07-15 20:27:15.820848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.569 [2024-07-15 20:27:15.821403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.569 [2024-07-15 20:27:15.821667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.569 [2024-07-15 20:27:15.821678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.569 [2024-07-15 20:27:15.821687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.569 [2024-07-15 20:27:15.825920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.569 [2024-07-15 20:27:15.834913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.569 [2024-07-15 20:27:15.835447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.569 [2024-07-15 20:27:15.835467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.569 [2024-07-15 20:27:15.835478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.569 [2024-07-15 20:27:15.835740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.569 [2024-07-15 20:27:15.836004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.569 [2024-07-15 20:27:15.836014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.569 [2024-07-15 20:27:15.836023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.569 [2024-07-15 20:27:15.840271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.569 [2024-07-15 20:27:15.849519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.569 [2024-07-15 20:27:15.850049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.569 [2024-07-15 20:27:15.850069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.569 [2024-07-15 20:27:15.850079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.569 [2024-07-15 20:27:15.850349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.570 [2024-07-15 20:27:15.850613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.570 [2024-07-15 20:27:15.850624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.570 [2024-07-15 20:27:15.850633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.570 [2024-07-15 20:27:15.854866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.570 [2024-07-15 20:27:15.864108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.570 [2024-07-15 20:27:15.864615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.570 [2024-07-15 20:27:15.864635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.570 [2024-07-15 20:27:15.864645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.570 [2024-07-15 20:27:15.864908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.570 [2024-07-15 20:27:15.865171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.570 [2024-07-15 20:27:15.865182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.570 [2024-07-15 20:27:15.865191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.570 [2024-07-15 20:27:15.869435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.570 [2024-07-15 20:27:15.878696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.570 [2024-07-15 20:27:15.879182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.570 [2024-07-15 20:27:15.879202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.570 [2024-07-15 20:27:15.879212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.570 [2024-07-15 20:27:15.879482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.570 [2024-07-15 20:27:15.879747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.570 [2024-07-15 20:27:15.879758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.570 [2024-07-15 20:27:15.879767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.570 [2024-07-15 20:27:15.884005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.570 [2024-07-15 20:27:15.893267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.570 [2024-07-15 20:27:15.893740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.570 [2024-07-15 20:27:15.893761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.570 [2024-07-15 20:27:15.893775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.570 [2024-07-15 20:27:15.894039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.570 [2024-07-15 20:27:15.894309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.570 [2024-07-15 20:27:15.894321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.570 [2024-07-15 20:27:15.894330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.570 [2024-07-15 20:27:15.898564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.570 [2024-07-15 20:27:15.907853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.570 [2024-07-15 20:27:15.908407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.570 [2024-07-15 20:27:15.908428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.570 [2024-07-15 20:27:15.908438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.570 [2024-07-15 20:27:15.908702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.570 [2024-07-15 20:27:15.908964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.570 [2024-07-15 20:27:15.908975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.570 [2024-07-15 20:27:15.908984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.570 [2024-07-15 20:27:15.913225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.831 [2024-07-15 20:27:15.922485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.831 [2024-07-15 20:27:15.922943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.831 [2024-07-15 20:27:15.922963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.831 [2024-07-15 20:27:15.922973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.831 [2024-07-15 20:27:15.923235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.831 [2024-07-15 20:27:15.923504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.831 [2024-07-15 20:27:15.923516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.831 [2024-07-15 20:27:15.923525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.831 [2024-07-15 20:27:15.927762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.831 [2024-07-15 20:27:15.937018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.831 [2024-07-15 20:27:15.937604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.831 [2024-07-15 20:27:15.937624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.831 [2024-07-15 20:27:15.937635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.831 [2024-07-15 20:27:15.937897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.831 [2024-07-15 20:27:15.938161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.831 [2024-07-15 20:27:15.938176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.831 [2024-07-15 20:27:15.938185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.831 [2024-07-15 20:27:15.942427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.831 [2024-07-15 20:27:15.951687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.831 [2024-07-15 20:27:15.952238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.831 [2024-07-15 20:27:15.952264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.831 [2024-07-15 20:27:15.952276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.831 [2024-07-15 20:27:15.952538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.831 [2024-07-15 20:27:15.952802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.831 [2024-07-15 20:27:15.952812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.831 [2024-07-15 20:27:15.952822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.831 [2024-07-15 20:27:15.957053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.831 [2024-07-15 20:27:15.966542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.831 [2024-07-15 20:27:15.967111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.831 [2024-07-15 20:27:15.967132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.831 [2024-07-15 20:27:15.967142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.831 [2024-07-15 20:27:15.967413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.831 [2024-07-15 20:27:15.967679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.831 [2024-07-15 20:27:15.967690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.831 [2024-07-15 20:27:15.967699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.831 [2024-07-15 20:27:15.971942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.831 [2024-07-15 20:27:15.981212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.831 [2024-07-15 20:27:15.981747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.831 [2024-07-15 20:27:15.981768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.831 [2024-07-15 20:27:15.981778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.831 [2024-07-15 20:27:15.982042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.831 [2024-07-15 20:27:15.982314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.831 [2024-07-15 20:27:15.982326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.831 [2024-07-15 20:27:15.982336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.831 [2024-07-15 20:27:15.986576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.831 [2024-07-15 20:27:15.995837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.831 [2024-07-15 20:27:15.996325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.831 [2024-07-15 20:27:15.996347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.831 [2024-07-15 20:27:15.996357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.831 [2024-07-15 20:27:15.996621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.831 [2024-07-15 20:27:15.996885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.831 [2024-07-15 20:27:15.996896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.831 [2024-07-15 20:27:15.996905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.831 [2024-07-15 20:27:16.001165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.831 [2024-07-15 20:27:16.010429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.831 [2024-07-15 20:27:16.010984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.831 [2024-07-15 20:27:16.011005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.831 [2024-07-15 20:27:16.011015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.831 [2024-07-15 20:27:16.011286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.831 [2024-07-15 20:27:16.011550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.831 [2024-07-15 20:27:16.011561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.831 [2024-07-15 20:27:16.011571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.831 [2024-07-15 20:27:16.015808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.831 [2024-07-15 20:27:16.025069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.831 [2024-07-15 20:27:16.025632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.831 [2024-07-15 20:27:16.025653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.831 [2024-07-15 20:27:16.025663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.831 [2024-07-15 20:27:16.025925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.831 [2024-07-15 20:27:16.026188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.831 [2024-07-15 20:27:16.026199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.831 [2024-07-15 20:27:16.026208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.831 [2024-07-15 20:27:16.030451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.831 [2024-07-15 20:27:16.039709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.831 [2024-07-15 20:27:16.040268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.831 [2024-07-15 20:27:16.040296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.831 [2024-07-15 20:27:16.040309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.831 [2024-07-15 20:27:16.040573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.831 [2024-07-15 20:27:16.040839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.831 [2024-07-15 20:27:16.040850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.831 [2024-07-15 20:27:16.040859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.831 [2024-07-15 20:27:16.045099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.831 [2024-07-15 20:27:16.054364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.831 [2024-07-15 20:27:16.054873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.831 [2024-07-15 20:27:16.054894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.831 [2024-07-15 20:27:16.054903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.831 [2024-07-15 20:27:16.055167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.832 [2024-07-15 20:27:16.055436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.832 [2024-07-15 20:27:16.055448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.832 [2024-07-15 20:27:16.055457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.832 [2024-07-15 20:27:16.059697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.832 [2024-07-15 20:27:16.068957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.832 [2024-07-15 20:27:16.069513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.832 [2024-07-15 20:27:16.069534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.832 [2024-07-15 20:27:16.069544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.832 [2024-07-15 20:27:16.069807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.832 [2024-07-15 20:27:16.070070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.832 [2024-07-15 20:27:16.070081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.832 [2024-07-15 20:27:16.070090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.832 [2024-07-15 20:27:16.074336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.832 [2024-07-15 20:27:16.083593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.832 [2024-07-15 20:27:16.084065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.832 [2024-07-15 20:27:16.084085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.832 [2024-07-15 20:27:16.084096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.832 [2024-07-15 20:27:16.084365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.832 [2024-07-15 20:27:16.084629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.832 [2024-07-15 20:27:16.084644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.832 [2024-07-15 20:27:16.084654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.832 [2024-07-15 20:27:16.088894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.832 [2024-07-15 20:27:16.098155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.832 [2024-07-15 20:27:16.098717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.832 [2024-07-15 20:27:16.098738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.832 [2024-07-15 20:27:16.098747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.832 [2024-07-15 20:27:16.099010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.832 [2024-07-15 20:27:16.099279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.832 [2024-07-15 20:27:16.099291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.832 [2024-07-15 20:27:16.099300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.832 [2024-07-15 20:27:16.103537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.832 [2024-07-15 20:27:16.112790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.832 [2024-07-15 20:27:16.113345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.832 [2024-07-15 20:27:16.113366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.832 [2024-07-15 20:27:16.113376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.832 [2024-07-15 20:27:16.113638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.832 [2024-07-15 20:27:16.113901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.832 [2024-07-15 20:27:16.113912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.832 [2024-07-15 20:27:16.113921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.832 [2024-07-15 20:27:16.118159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.832 [2024-07-15 20:27:16.127420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.832 [2024-07-15 20:27:16.127953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.832 [2024-07-15 20:27:16.127974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.832 [2024-07-15 20:27:16.127984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.832 [2024-07-15 20:27:16.128247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.832 [2024-07-15 20:27:16.128518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.832 [2024-07-15 20:27:16.128529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.832 [2024-07-15 20:27:16.128538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.832 [2024-07-15 20:27:16.132779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.832 [2024-07-15 20:27:16.142029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.832 [2024-07-15 20:27:16.142592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.832 [2024-07-15 20:27:16.142613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.832 [2024-07-15 20:27:16.142623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.832 [2024-07-15 20:27:16.142887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.832 [2024-07-15 20:27:16.143150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.832 [2024-07-15 20:27:16.143161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.832 [2024-07-15 20:27:16.143170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.832 [2024-07-15 20:27:16.147414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:50.832 [2024-07-15 20:27:16.156673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.832 [2024-07-15 20:27:16.157231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.832 [2024-07-15 20:27:16.157252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.832 [2024-07-15 20:27:16.157269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.832 [2024-07-15 20:27:16.157533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.832 [2024-07-15 20:27:16.157796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.832 [2024-07-15 20:27:16.157807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.832 [2024-07-15 20:27:16.157816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.832 [2024-07-15 20:27:16.162056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.832 [2024-07-15 20:27:16.171315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.832 [2024-07-15 20:27:16.171849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.832 [2024-07-15 20:27:16.171869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:50.832 [2024-07-15 20:27:16.171879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:50.832 [2024-07-15 20:27:16.172141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:50.832 [2024-07-15 20:27:16.172412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.832 [2024-07-15 20:27:16.172423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.832 [2024-07-15 20:27:16.172433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.832 [2024-07-15 20:27:16.176669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.093 [2024-07-15 20:27:16.185937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.094 [2024-07-15 20:27:16.186395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.094 [2024-07-15 20:27:16.186415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.094 [2024-07-15 20:27:16.186425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.094 [2024-07-15 20:27:16.186692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.094 [2024-07-15 20:27:16.186956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.094 [2024-07-15 20:27:16.186967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.094 [2024-07-15 20:27:16.186976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.094 [2024-07-15 20:27:16.191219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.094 [2024-07-15 20:27:16.200479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.094 [2024-07-15 20:27:16.201036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.094 [2024-07-15 20:27:16.201057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.094 [2024-07-15 20:27:16.201067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.094 [2024-07-15 20:27:16.201337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.094 [2024-07-15 20:27:16.201601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.094 [2024-07-15 20:27:16.201612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.094 [2024-07-15 20:27:16.201622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.094 [2024-07-15 20:27:16.205859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.094 [2024-07-15 20:27:16.215110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.094 [2024-07-15 20:27:16.215559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.094 [2024-07-15 20:27:16.215580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.094 [2024-07-15 20:27:16.215590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.094 [2024-07-15 20:27:16.215852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.094 [2024-07-15 20:27:16.216116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.094 [2024-07-15 20:27:16.216126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.094 [2024-07-15 20:27:16.216135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.094 [2024-07-15 20:27:16.220382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.094 [2024-07-15 20:27:16.229888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.094 [2024-07-15 20:27:16.230442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.094 [2024-07-15 20:27:16.230463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.094 [2024-07-15 20:27:16.230473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.094 [2024-07-15 20:27:16.230735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.094 [2024-07-15 20:27:16.230999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.094 [2024-07-15 20:27:16.231010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.094 [2024-07-15 20:27:16.231026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.094 [2024-07-15 20:27:16.235268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.094 [2024-07-15 20:27:16.244526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.094 [2024-07-15 20:27:16.244995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.094 [2024-07-15 20:27:16.245016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.094 [2024-07-15 20:27:16.245026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.094 [2024-07-15 20:27:16.245296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.094 [2024-07-15 20:27:16.245560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.094 [2024-07-15 20:27:16.245571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.094 [2024-07-15 20:27:16.245580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.094 [2024-07-15 20:27:16.249824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.094 [2024-07-15 20:27:16.259081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.094 [2024-07-15 20:27:16.259637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.094 [2024-07-15 20:27:16.259657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.094 [2024-07-15 20:27:16.259667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.094 [2024-07-15 20:27:16.259930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.094 [2024-07-15 20:27:16.260194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.094 [2024-07-15 20:27:16.260205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.094 [2024-07-15 20:27:16.260214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.094 [2024-07-15 20:27:16.264459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.094 [2024-07-15 20:27:16.273714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.094 [2024-07-15 20:27:16.274270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.094 [2024-07-15 20:27:16.274291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.094 [2024-07-15 20:27:16.274301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.094 [2024-07-15 20:27:16.274565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.094 [2024-07-15 20:27:16.274828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.094 [2024-07-15 20:27:16.274839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.094 [2024-07-15 20:27:16.274848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.094 [2024-07-15 20:27:16.279097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.094 [2024-07-15 20:27:16.288352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.094 [2024-07-15 20:27:16.288910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.094 [2024-07-15 20:27:16.288934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.094 [2024-07-15 20:27:16.288944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.095 [2024-07-15 20:27:16.289208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.095 [2024-07-15 20:27:16.289476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.095 [2024-07-15 20:27:16.289488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.095 [2024-07-15 20:27:16.289497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.095 [2024-07-15 20:27:16.293734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.095 [2024-07-15 20:27:16.302991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.095 [2024-07-15 20:27:16.303533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.095 [2024-07-15 20:27:16.303554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.095 [2024-07-15 20:27:16.303564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.095 [2024-07-15 20:27:16.303827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.095 [2024-07-15 20:27:16.304090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.095 [2024-07-15 20:27:16.304101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.095 [2024-07-15 20:27:16.304110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.095 [2024-07-15 20:27:16.308357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.095 [2024-07-15 20:27:16.317612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.095 [2024-07-15 20:27:16.318083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.095 [2024-07-15 20:27:16.318103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.095 [2024-07-15 20:27:16.318113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.095 [2024-07-15 20:27:16.318381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.095 [2024-07-15 20:27:16.318645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.095 [2024-07-15 20:27:16.318655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.095 [2024-07-15 20:27:16.318665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.095 [2024-07-15 20:27:16.322901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.095 [2024-07-15 20:27:16.332145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.095 [2024-07-15 20:27:16.332680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.095 [2024-07-15 20:27:16.332701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.095 [2024-07-15 20:27:16.332710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.095 [2024-07-15 20:27:16.332973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.095 [2024-07-15 20:27:16.333241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.095 [2024-07-15 20:27:16.333252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.095 [2024-07-15 20:27:16.333268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.095 [2024-07-15 20:27:16.337506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.095 [2024-07-15 20:27:16.346758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.095 [2024-07-15 20:27:16.347334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.095 [2024-07-15 20:27:16.347376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.095 [2024-07-15 20:27:16.347398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.095 [2024-07-15 20:27:16.347975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.095 [2024-07-15 20:27:16.348347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.095 [2024-07-15 20:27:16.348359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.095 [2024-07-15 20:27:16.348368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.095 [2024-07-15 20:27:16.352598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.095 [2024-07-15 20:27:16.361368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.095 [2024-07-15 20:27:16.361934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.095 [2024-07-15 20:27:16.361955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.095 [2024-07-15 20:27:16.361965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.095 [2024-07-15 20:27:16.362227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.095 [2024-07-15 20:27:16.362497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.095 [2024-07-15 20:27:16.362509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.095 [2024-07-15 20:27:16.362518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.095 [2024-07-15 20:27:16.366755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.095 [2024-07-15 20:27:16.375992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.095 [2024-07-15 20:27:16.376528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.095 [2024-07-15 20:27:16.376569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.095 [2024-07-15 20:27:16.376590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.095 [2024-07-15 20:27:16.377167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.095 [2024-07-15 20:27:16.377681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.095 [2024-07-15 20:27:16.377693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.095 [2024-07-15 20:27:16.377702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.095 [2024-07-15 20:27:16.381937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.095 [2024-07-15 20:27:16.390686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.095 [2024-07-15 20:27:16.391239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.095 [2024-07-15 20:27:16.391265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.095 [2024-07-15 20:27:16.391275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.095 [2024-07-15 20:27:16.391538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.095 [2024-07-15 20:27:16.391800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.095 [2024-07-15 20:27:16.391811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.095 [2024-07-15 20:27:16.391820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.095 [2024-07-15 20:27:16.396056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.095 [2024-07-15 20:27:16.405301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.096 [2024-07-15 20:27:16.405835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.096 [2024-07-15 20:27:16.405876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.096 [2024-07-15 20:27:16.405898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.096 [2024-07-15 20:27:16.406461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.096 [2024-07-15 20:27:16.406725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.096 [2024-07-15 20:27:16.406736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.096 [2024-07-15 20:27:16.406745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.096 [2024-07-15 20:27:16.410976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
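Each cycle in the stream above follows the same sequence: bdev_nvme disconnects the controller and tries to reconnect, posix_sock_create's connect() to 10.0.0.2:4420 fails with errno 111, the qpair cannot be flushed (bad file descriptor), and the reconnect poll marks the reset attempt as failed before the next retry begins. On Linux, errno 111 is ECONNREFUSED, meaning nothing was accepting TCP connections on that address and port at that moment, which is consistent with the target application being down until it is restarted further below. A minimal, self-contained illustration of that errno (not part of the test; 127.0.0.1:4420 is just an address assumed to have no listener, and /dev/tcp is a bash-only feature):

# Attempt a plain TCP connect() to a port with no listener; bash reports
# "Connection refused", which is errno 111 (ECONNREFUSED) on Linux --
# the same condition nvme_tcp_qpair_connect_sock keeps hitting above.
if ! (echo > /dev/tcp/127.0.0.1/4420) 2>/dev/null; then
    echo "connect() refused (errno 111 / ECONNREFUSED): no listener on 127.0.0.1:4420"
fi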
00:28:51.096 [2024-07-15 20:27:16.419980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.096 [2024-07-15 20:27:16.420504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.096 [2024-07-15 20:27:16.420525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.096 [2024-07-15 20:27:16.420535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.096 [2024-07-15 20:27:16.420797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.096 [2024-07-15 20:27:16.421061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.096 [2024-07-15 20:27:16.421072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.096 [2024-07-15 20:27:16.421081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.096 [2024-07-15 20:27:16.425322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 209529 Killed "${NVMF_APP[@]}" "$@" 00:28:51.096 20:27:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:51.096 20:27:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:51.096 20:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:51.096 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:51.096 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.096 [2024-07-15 20:27:16.434570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.096 [2024-07-15 20:27:16.435097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.096 [2024-07-15 20:27:16.435118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.096 [2024-07-15 20:27:16.435128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.096 [2024-07-15 20:27:16.435399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.096 [2024-07-15 20:27:16.435664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.096 [2024-07-15 20:27:16.435675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.096 [2024-07-15 20:27:16.435685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.096 [2024-07-15 20:27:16.439918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.096 20:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=211418 00:28:51.096 20:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 211418 00:28:51.357 20:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:51.357 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 211418 ']' 00:28:51.357 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.357 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:51.357 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.357 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:51.357 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.357 [2024-07-15 20:27:16.449174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.357 [2024-07-15 20:27:16.449652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.357 [2024-07-15 20:27:16.449673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.357 [2024-07-15 20:27:16.449683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.357 [2024-07-15 20:27:16.449948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.358 [2024-07-15 20:27:16.450212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.358 [2024-07-15 20:27:16.450223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.358 [2024-07-15 20:27:16.450233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.358 [2024-07-15 20:27:16.454476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
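The interleaved shell trace above shows the harness's recovery step: the previous target process has been reported killed (the "line 35: 209529 Killed \"${NVMF_APP[@]}\"" message), and bdevperf.sh's tgt_init calls nvmfappstart, which relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with "-i 0 -e 0xFFFF -m 0xE", records its pid (211418), and waits for it to listen on /var/tmp/spdk.sock. Below is a simplified sketch of that start-and-wait pattern; it is not the actual nvmfappstart/waitforlisten helpers, it omits the network namespace, and SPDK_BIN and RPC_SOCK are placeholder paths assumed for illustration only:

# Start a fresh target with the same flags seen in the log, then poll its
# JSON-RPC socket until it answers before issuing any configuration RPCs.
SPDK_BIN=./build/bin/nvmf_tgt        # placeholder path
RPC_SOCK=/var/tmp/spdk.sock

"$SPDK_BIN" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

for _ in $(seq 1 60); do             # give up after ~30 seconds
    if ./scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
        echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"
        break
    fi
    sleep 0.5
done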
00:28:51.358 [2024-07-15 20:27:16.463741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.358 [2024-07-15 20:27:16.464298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.358 [2024-07-15 20:27:16.464319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.358 [2024-07-15 20:27:16.464333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.358 [2024-07-15 20:27:16.464596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.358 [2024-07-15 20:27:16.464859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.358 [2024-07-15 20:27:16.464869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.358 [2024-07-15 20:27:16.464878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.358 [2024-07-15 20:27:16.469115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.358 [2024-07-15 20:27:16.478400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.358 [2024-07-15 20:27:16.478964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.358 [2024-07-15 20:27:16.478985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.358 [2024-07-15 20:27:16.478996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.358 [2024-07-15 20:27:16.479265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.358 [2024-07-15 20:27:16.479530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.358 [2024-07-15 20:27:16.479541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.358 [2024-07-15 20:27:16.479550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.358 [2024-07-15 20:27:16.483785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.358 [2024-07-15 20:27:16.491985] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:28:51.358 [2024-07-15 20:27:16.492037] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:51.358 [2024-07-15 20:27:16.493038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.358 [2024-07-15 20:27:16.493524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.358 [2024-07-15 20:27:16.493546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.358 [2024-07-15 20:27:16.493556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.358 [2024-07-15 20:27:16.493819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.358 [2024-07-15 20:27:16.494083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.358 [2024-07-15 20:27:16.494094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.358 [2024-07-15 20:27:16.494104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.358 [2024-07-15 20:27:16.498341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.358 [2024-07-15 20:27:16.507701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.358 [2024-07-15 20:27:16.508123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.358 [2024-07-15 20:27:16.508144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.358 [2024-07-15 20:27:16.508155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.358 [2024-07-15 20:27:16.508429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.358 [2024-07-15 20:27:16.508694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.358 [2024-07-15 20:27:16.508705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.358 [2024-07-15 20:27:16.508714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.358 [2024-07-15 20:27:16.512948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.358 [2024-07-15 20:27:16.522444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.358 [2024-07-15 20:27:16.522999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.358 [2024-07-15 20:27:16.523019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.358 [2024-07-15 20:27:16.523030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.358 [2024-07-15 20:27:16.523299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.358 [2024-07-15 20:27:16.523562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.358 [2024-07-15 20:27:16.523574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.358 [2024-07-15 20:27:16.523584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.358 [2024-07-15 20:27:16.527821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.358 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.358 [2024-07-15 20:27:16.537080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.358 [2024-07-15 20:27:16.537633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.358 [2024-07-15 20:27:16.537653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.358 [2024-07-15 20:27:16.537663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.358 [2024-07-15 20:27:16.537926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.358 [2024-07-15 20:27:16.538190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.358 [2024-07-15 20:27:16.538201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.358 [2024-07-15 20:27:16.538210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.358 [2024-07-15 20:27:16.542449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.358 [2024-07-15 20:27:16.551699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.358 [2024-07-15 20:27:16.552261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.358 [2024-07-15 20:27:16.552283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.358 [2024-07-15 20:27:16.552293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.358 [2024-07-15 20:27:16.552557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.358 [2024-07-15 20:27:16.552820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.358 [2024-07-15 20:27:16.552830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.358 [2024-07-15 20:27:16.552843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.358 [2024-07-15 20:27:16.557079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.358 [2024-07-15 20:27:16.566328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.358 [2024-07-15 20:27:16.566886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.358 [2024-07-15 20:27:16.566907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.358 [2024-07-15 20:27:16.566917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.358 [2024-07-15 20:27:16.567180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.358 [2024-07-15 20:27:16.567451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.358 [2024-07-15 20:27:16.567463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.358 [2024-07-15 20:27:16.567472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.358 [2024-07-15 20:27:16.570003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:51.358 [2024-07-15 20:27:16.571704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.358 [2024-07-15 20:27:16.580972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.358 [2024-07-15 20:27:16.581544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.358 [2024-07-15 20:27:16.581566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.358 [2024-07-15 20:27:16.581577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.358 [2024-07-15 20:27:16.581840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.358 [2024-07-15 20:27:16.582104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.358 [2024-07-15 20:27:16.582115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.358 [2024-07-15 20:27:16.582124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.358 [2024-07-15 20:27:16.586364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.358 [2024-07-15 20:27:16.595615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.358 [2024-07-15 20:27:16.596084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.358 [2024-07-15 20:27:16.596105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.358 [2024-07-15 20:27:16.596115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.358 [2024-07-15 20:27:16.596384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.358 [2024-07-15 20:27:16.596648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.358 [2024-07-15 20:27:16.596659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.358 [2024-07-15 20:27:16.596668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.358 [2024-07-15 20:27:16.600903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.359 [2024-07-15 20:27:16.610151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.359 [2024-07-15 20:27:16.610715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.359 [2024-07-15 20:27:16.610736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.359 [2024-07-15 20:27:16.610746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.359 [2024-07-15 20:27:16.611009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.359 [2024-07-15 20:27:16.611279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.359 [2024-07-15 20:27:16.611304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.359 [2024-07-15 20:27:16.611313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.359 [2024-07-15 20:27:16.615549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.359 [2024-07-15 20:27:16.624813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.359 [2024-07-15 20:27:16.625421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.359 [2024-07-15 20:27:16.625445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.359 [2024-07-15 20:27:16.625456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.359 [2024-07-15 20:27:16.625720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.359 [2024-07-15 20:27:16.625983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.359 [2024-07-15 20:27:16.625995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.359 [2024-07-15 20:27:16.626004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.359 [2024-07-15 20:27:16.630240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.359 [2024-07-15 20:27:16.639504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.359 [2024-07-15 20:27:16.640088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.359 [2024-07-15 20:27:16.640109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.359 [2024-07-15 20:27:16.640119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.359 [2024-07-15 20:27:16.640388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.359 [2024-07-15 20:27:16.640653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.359 [2024-07-15 20:27:16.640664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.359 [2024-07-15 20:27:16.640674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.359 [2024-07-15 20:27:16.644907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.359 [2024-07-15 20:27:16.654158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.359 [2024-07-15 20:27:16.654676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.359 [2024-07-15 20:27:16.654697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.359 [2024-07-15 20:27:16.654707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.359 [2024-07-15 20:27:16.654979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.359 [2024-07-15 20:27:16.655243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.359 [2024-07-15 20:27:16.655260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.359 [2024-07-15 20:27:16.655270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.359 [2024-07-15 20:27:16.659496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.359 [2024-07-15 20:27:16.661777] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:51.359 [2024-07-15 20:27:16.661810] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:51.359 [2024-07-15 20:27:16.661821] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:51.359 [2024-07-15 20:27:16.661830] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:51.359 [2024-07-15 20:27:16.661837] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
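The app_setup_trace notices above point at SPDK's runtime tracing for this nvmf_tgt instance. A minimal sketch of how those hints are typically followed up (commands taken verbatim from the notice; the output paths are illustrative, not part of the test script):

    # Snapshot the live tracepoints of the '-i 0' nvmf target, per the notice above
    spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
    # Or keep the raw shared-memory trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/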
00:28:51.359 [2024-07-15 20:27:16.661876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:51.359 [2024-07-15 20:27:16.661951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:51.359 [2024-07-15 20:27:16.661954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.359 [2024-07-15 20:27:16.668747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.359 [2024-07-15 20:27:16.669291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.359 [2024-07-15 20:27:16.669314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.359 [2024-07-15 20:27:16.669325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.359 [2024-07-15 20:27:16.669589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.359 [2024-07-15 20:27:16.669853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.359 [2024-07-15 20:27:16.669864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.359 [2024-07-15 20:27:16.669874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.359 [2024-07-15 20:27:16.674108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.359 [2024-07-15 20:27:16.683372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.359 [2024-07-15 20:27:16.683953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.359 [2024-07-15 20:27:16.683977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.359 [2024-07-15 20:27:16.683987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.359 [2024-07-15 20:27:16.684251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.359 [2024-07-15 20:27:16.684523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.359 [2024-07-15 20:27:16.684534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.359 [2024-07-15 20:27:16.684544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.359 [2024-07-15 20:27:16.688778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.359 [2024-07-15 20:27:16.698030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.359 [2024-07-15 20:27:16.698593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.359 [2024-07-15 20:27:16.698615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.359 [2024-07-15 20:27:16.698626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.359 [2024-07-15 20:27:16.698890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.359 [2024-07-15 20:27:16.699154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.359 [2024-07-15 20:27:16.699165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.359 [2024-07-15 20:27:16.699175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.359 [2024-07-15 20:27:16.703418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.619 [2024-07-15 20:27:16.712671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.619 [2024-07-15 20:27:16.713214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.619 [2024-07-15 20:27:16.713236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.619 [2024-07-15 20:27:16.713248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.619 [2024-07-15 20:27:16.713518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.619 [2024-07-15 20:27:16.713782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.619 [2024-07-15 20:27:16.713793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.619 [2024-07-15 20:27:16.713803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.619 [2024-07-15 20:27:16.718035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.619 [2024-07-15 20:27:16.727283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.619 [2024-07-15 20:27:16.727825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.619 [2024-07-15 20:27:16.727847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.619 [2024-07-15 20:27:16.727857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.619 [2024-07-15 20:27:16.728121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.619 [2024-07-15 20:27:16.728390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.619 [2024-07-15 20:27:16.728402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.619 [2024-07-15 20:27:16.728411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.619 [2024-07-15 20:27:16.732640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.619 [2024-07-15 20:27:16.741827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.619 [2024-07-15 20:27:16.742369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.619 [2024-07-15 20:27:16.742391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.620 [2024-07-15 20:27:16.742401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.620 [2024-07-15 20:27:16.742669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.620 [2024-07-15 20:27:16.742934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.620 [2024-07-15 20:27:16.742945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.620 [2024-07-15 20:27:16.742954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.620 [2024-07-15 20:27:16.747188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.620 [2024-07-15 20:27:16.756436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.620 [2024-07-15 20:27:16.757016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.620 [2024-07-15 20:27:16.757038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.620 [2024-07-15 20:27:16.757048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.620 [2024-07-15 20:27:16.757317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.620 [2024-07-15 20:27:16.757581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.620 [2024-07-15 20:27:16.757592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.620 [2024-07-15 20:27:16.757602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.620 [2024-07-15 20:27:16.761830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.620 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:51.620 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:28:51.620 20:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:51.620 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:51.620 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.620 [2024-07-15 20:27:16.771081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.620 [2024-07-15 20:27:16.771566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.620 [2024-07-15 20:27:16.771588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.620 [2024-07-15 20:27:16.771598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.620 [2024-07-15 20:27:16.771860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.620 [2024-07-15 20:27:16.772124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.620 [2024-07-15 20:27:16.772135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.620 [2024-07-15 20:27:16.772146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.620 [2024-07-15 20:27:16.776386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.620 [2024-07-15 20:27:16.785639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.620 [2024-07-15 20:27:16.786225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.620 [2024-07-15 20:27:16.786245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.620 [2024-07-15 20:27:16.786261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.620 [2024-07-15 20:27:16.786530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.620 [2024-07-15 20:27:16.786796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.620 [2024-07-15 20:27:16.786808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.620 [2024-07-15 20:27:16.786818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.620 [2024-07-15 20:27:16.791050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.620 [2024-07-15 20:27:16.800298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.620 [2024-07-15 20:27:16.800828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.620 [2024-07-15 20:27:16.800850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.620 [2024-07-15 20:27:16.800860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.620 [2024-07-15 20:27:16.801123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.620 [2024-07-15 20:27:16.801393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.620 [2024-07-15 20:27:16.801405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.620 [2024-07-15 20:27:16.801414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.620 20:27:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.620 20:27:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:51.620 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.620 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.620 [2024-07-15 20:27:16.805646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.620 [2024-07-15 20:27:16.811046] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.620 [2024-07-15 20:27:16.814900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.620 [2024-07-15 20:27:16.815381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.620 [2024-07-15 20:27:16.815403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.620 [2024-07-15 20:27:16.815412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.620 [2024-07-15 20:27:16.815674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.620 [2024-07-15 20:27:16.815938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.620 [2024-07-15 20:27:16.815948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.620 [2024-07-15 20:27:16.815957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.620 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.620 20:27:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:51.620 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.620 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.620 [2024-07-15 20:27:16.820190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.620 [2024-07-15 20:27:16.829428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.620 [2024-07-15 20:27:16.829899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.620 [2024-07-15 20:27:16.829918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.620 [2024-07-15 20:27:16.829928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.620 [2024-07-15 20:27:16.830192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.620 [2024-07-15 20:27:16.830461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.620 [2024-07-15 20:27:16.830473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.620 [2024-07-15 20:27:16.830482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.620 [2024-07-15 20:27:16.834708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.620 [2024-07-15 20:27:16.844203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.620 [2024-07-15 20:27:16.844742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.620 [2024-07-15 20:27:16.844763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.620 [2024-07-15 20:27:16.844773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.620 [2024-07-15 20:27:16.845034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.620 [2024-07-15 20:27:16.845303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.620 [2024-07-15 20:27:16.845314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.620 [2024-07-15 20:27:16.845323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.620 [2024-07-15 20:27:16.849560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.620 [2024-07-15 20:27:16.858806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.620 [2024-07-15 20:27:16.859348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.620 [2024-07-15 20:27:16.859371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.620 [2024-07-15 20:27:16.859381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.620 [2024-07-15 20:27:16.859646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.620 [2024-07-15 20:27:16.859910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.620 [2024-07-15 20:27:16.859921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.620 [2024-07-15 20:27:16.859931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.620 Malloc0 00:28:51.620 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.620 20:27:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:51.620 [2024-07-15 20:27:16.864163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:51.620 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.620 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.620 [2024-07-15 20:27:16.873407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.620 [2024-07-15 20:27:16.873930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.620 [2024-07-15 20:27:16.873951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df2e90 with addr=10.0.0.2, port=4420 00:28:51.620 [2024-07-15 20:27:16.873961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df2e90 is same with the state(5) to be set 00:28:51.620 [2024-07-15 20:27:16.874223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df2e90 (9): Bad file descriptor 00:28:51.620 [2024-07-15 20:27:16.874492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.621 [2024-07-15 20:27:16.874504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.621 [2024-07-15 20:27:16.874513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.621 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.621 20:27:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:51.621 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.621 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.621 [2024-07-15 20:27:16.878755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.621 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.621 20:27:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:51.621 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.621 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.621 [2024-07-15 20:27:16.886723] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.621 [2024-07-15 20:27:16.888001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.621 20:27:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.621 20:27:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 210358 00:28:51.880 [2024-07-15 20:27:17.053843] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
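The tgt_init sequence traced above reduces to a handful of RPCs against the freshly started target. A minimal equivalent sketch using scripts/rpc.py directly (the rpc_cmd wrapper, netns prefix and harness socket handling are omitted, so treat this as illustrative rather than a reproduction of bdevperf.sh; flags mirror what appears in the trace):

    # Assumes a running nvmf_tgt listening on the default /var/tmp/spdk.sock RPC socket
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420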
00:29:01.866 00:29:01.866 Latency(us) 00:29:01.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.866 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:01.866 Verification LBA range: start 0x0 length 0x4000 00:29:01.866 Nvme1n1 : 15.01 5675.54 22.17 7368.53 0.00 9781.79 647.91 20375.74 00:29:01.866 =================================================================================================================== 00:29:01.866 Total : 5675.54 22.17 7368.53 0.00 9781.79 647.91 20375.74 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:01.866 rmmod nvme_tcp 00:29:01.866 rmmod nvme_fabrics 00:29:01.866 rmmod nvme_keyring 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 211418 ']' 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 211418 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 211418 ']' 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 211418 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 211418 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 211418' 00:29:01.866 killing process with pid 211418 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 211418 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 211418 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:01.866 
20:27:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:01.866 20:27:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.243 20:27:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:03.243 00:29:03.243 real 0m25.861s 00:29:03.243 user 1m1.309s 00:29:03.243 sys 0m6.180s 00:29:03.243 20:27:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:03.243 20:27:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:03.243 ************************************ 00:29:03.243 END TEST nvmf_bdevperf 00:29:03.243 ************************************ 00:29:03.243 20:27:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:03.243 20:27:28 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:03.243 20:27:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:03.243 20:27:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:03.243 20:27:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:03.243 ************************************ 00:29:03.243 START TEST nvmf_target_disconnect 00:29:03.243 ************************************ 00:29:03.243 20:27:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:03.502 * Looking for test storage... 
00:29:03.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:03.502 20:27:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:08.774 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:08.774 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.774 20:27:33 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:08.774 Found net devices under 0000:af:00.0: cvl_0_0 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:08.774 Found net devices under 0000:af:00.1: cvl_0_1 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:08.774 20:27:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:08.774 20:27:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:08.774 20:27:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:08.774 20:27:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:08.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:08.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:29:08.774 00:29:08.774 --- 10.0.0.2 ping statistics --- 00:29:08.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.774 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:29:08.774 20:27:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:08.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:08.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:29:08.774 00:29:08.774 --- 10.0.0.1 ping statistics --- 00:29:08.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.774 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:29:08.774 20:27:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:08.774 20:27:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:08.774 20:27:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:08.774 20:27:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:08.774 20:27:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:08.774 20:27:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:08.774 20:27:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:08.774 20:27:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:08.774 20:27:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:08.774 20:27:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:08.774 20:27:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:08.774 20:27:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:08.774 20:27:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:09.034 ************************************ 00:29:09.034 START TEST nvmf_target_disconnect_tc1 00:29:09.034 ************************************ 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:29:09.034 
20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:09.034 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.034 [2024-07-15 20:27:34.250620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.034 [2024-07-15 20:27:34.250716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed0cf0 with addr=10.0.0.2, port=4420 00:29:09.034 [2024-07-15 20:27:34.250769] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:09.034 [2024-07-15 20:27:34.250800] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:09.034 [2024-07-15 20:27:34.250819] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:09.034 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:09.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:09.034 Initializing NVMe Controllers 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:09.034 00:29:09.034 real 0m0.123s 00:29:09.034 user 0m0.058s 00:29:09.034 sys 0m0.064s 
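Editor's note: tc1 above deliberately runs the reconnect example before any target is listening and treats the resulting probe failure (connect() errno = 111, exit status es=1) as a pass. A minimal stand-in for that negative-test inversion, assuming the real NOT helper in autotest_common.sh additionally manages xtrace and core-dump handling:

# Sketch of the negative-test wrapper pattern used by tc1 (not the real NOT()).
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # killed by a signal: propagate as failure
    (( es != 0 ))                    # ordinary non-zero exit: the test passes
}

# Expected to fail while nothing listens on 10.0.0.2:4420, so NOT returns 0.
NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'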
00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:09.034 ************************************ 00:29:09.034 END TEST nvmf_target_disconnect_tc1 00:29:09.034 ************************************ 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:09.034 ************************************ 00:29:09.034 START TEST nvmf_target_disconnect_tc2 00:29:09.034 ************************************ 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=216624 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 216624 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 216624 ']' 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:09.034 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
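Editor's note: nvmfappstart above records the target's PID (nvmfpid=216624) and then blocks in waitforlisten until the JSON-RPC socket answers. An illustrative poll loop with the same shape; the rpc.py path relative to the repository root and the default /var/tmp/spdk.sock socket are assumptions, and the real helper is more thorough:

# Rough approximation of the waitforlisten step: succeed once the target PID is
# alive and its RPC server responds (rpc_get_methods is a standard SPDK RPC).
waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || return 1                      # target died
        scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1   # timed out waiting for the RPC socket
}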
00:29:09.035 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:09.035 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:09.035 20:27:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.294 [2024-07-15 20:27:34.387564] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:29:09.294 [2024-07-15 20:27:34.387619] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.294 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.294 [2024-07-15 20:27:34.505286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:09.552 [2024-07-15 20:27:34.654233] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:09.552 [2024-07-15 20:27:34.654307] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.552 [2024-07-15 20:27:34.654329] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.552 [2024-07-15 20:27:34.654346] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.552 [2024-07-15 20:27:34.654361] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:09.552 [2024-07-15 20:27:34.654497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:09.552 [2024-07-15 20:27:34.654613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:09.552 [2024-07-15 20:27:34.654728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:09.552 [2024-07-15 20:27:34.654733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:10.120 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:10.120 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:10.120 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.121 Malloc0 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.121 [2024-07-15 20:27:35.305930] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.121 [2024-07-15 20:27:35.334481] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=216782 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:10.121 20:27:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:10.121 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.030 20:27:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 216624 00:29:12.030 20:27:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Write completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Write completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Write completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Write completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Write completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Write completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Write completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Write completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Write completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Write completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with 
error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Write completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 [2024-07-15 20:27:37.366659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Read completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Write completed with error (sct=0, sc=8) 00:29:12.030 starting I/O failed 00:29:12.030 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 [2024-07-15 20:27:37.366860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, 
sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 [2024-07-15 20:27:37.367143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 
starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Write completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 Read completed with error (sct=0, sc=8) 00:29:12.031 starting I/O failed 00:29:12.031 [2024-07-15 20:27:37.367428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.031 [2024-07-15 20:27:37.367555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-07-15 20:27:37.367583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-07-15 20:27:37.367733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-07-15 20:27:37.367753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-07-15 20:27:37.367878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-07-15 20:27:37.367892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-07-15 20:27:37.368058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-07-15 20:27:37.368072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-07-15 20:27:37.368247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-07-15 20:27:37.368269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 
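Editor's note: once the target is killed (kill -9 216624 above), the reconnect example's in-flight commands complete with sct=0, sc=8, i.e. generic status "command aborted due to SQ deletion", and each qpair then reports CQ transport error -6 (ENXIO). For reference, the target those qpairs had been connected to was assembled by the rpc_cmd calls traced earlier; the same configuration expressed as a standalone rpc.py sequence (a sketch; the default RPC socket path is an assumption, the arguments mirror the traced calls):

# Sketch: equivalent of the rpc_cmd configuration traced above.
rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420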
00:29:12.031 [2024-07-15 20:27:37.368401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-07-15 20:27:37.368415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-07-15 20:27:37.368548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-07-15 20:27:37.368562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-07-15 20:27:37.368693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-07-15 20:27:37.368706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-07-15 20:27:37.368842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-07-15 20:27:37.368871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-07-15 20:27:37.369037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-07-15 20:27:37.369066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-07-15 20:27:37.369286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-07-15 20:27:37.369316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-07-15 20:27:37.369467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-07-15 20:27:37.369497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.369649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.369679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.369949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.369978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.370127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.370156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 
00:29:12.032 [2024-07-15 20:27:37.370349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.370378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.370547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.370576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.370713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.370743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.370899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.370927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.371166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.371196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.371367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.371398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.371561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.371574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.371709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.371722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.371833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.371847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.372018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.372047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 
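Editor's note: the repeated "connect() failed, errno = 111" / "qpair failed and we were unable to recover it" lines are the reconnect example retrying 10.0.0.2:4420 after the target process was killed; the address is still configured inside the cvl_0_0_ns_spdk namespace, but nothing is listening, so every TCP connect is refused. Illustrative commands (not part of the test) to confirm that state from the build host:

# Nothing should be listening on the NVMe/TCP port inside the target namespace.
ip netns exec cvl_0_0_ns_spdk ss -ltn 'sport = :4420'
# The interface and address are still up, so plain ICMP keeps answering...
ping -c 1 -W 1 10.0.0.2
# ...but a raw TCP connect is refused (errno 111), matching the log above.
timeout 1 bash -c '</dev/tcp/10.0.0.2/4420' || echo "connection refused"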
00:29:12.032 [2024-07-15 20:27:37.372187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.372217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.372372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.372403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.372538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.372568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.372700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.372713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.372816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.372830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.372943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.372956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.373068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.373081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.373172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.373188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.373332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.373347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.373448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.373463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 
00:29:12.032 [2024-07-15 20:27:37.373584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.373598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.373686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.373700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.373809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.373822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.373991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.374021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.374151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.374180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.374305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.374334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.374464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.374493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.374650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.374664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.374769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.374783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.374887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.374901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 
00:29:12.032 [2024-07-15 20:27:37.375106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.375135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.375303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.375333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.375564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.375594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.375734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.375748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.375917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.375931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.376048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.376061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.376170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.032 [2024-07-15 20:27:37.376183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.032 qpair failed and we were unable to recover it. 00:29:12.032 [2024-07-15 20:27:37.376297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.033 [2024-07-15 20:27:37.376312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.033 qpair failed and we were unable to recover it. 00:29:12.033 [2024-07-15 20:27:37.376407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.033 [2024-07-15 20:27:37.376421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.033 qpair failed and we were unable to recover it. 00:29:12.033 [2024-07-15 20:27:37.376528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.033 [2024-07-15 20:27:37.376542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.033 qpair failed and we were unable to recover it. 
00:29:12.033 [2024-07-15 20:27:37.376642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.033 [2024-07-15 20:27:37.376656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.033 qpair failed and we were unable to recover it. 00:29:12.033 [2024-07-15 20:27:37.376749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.033 [2024-07-15 20:27:37.376762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.033 qpair failed and we were unable to recover it. 00:29:12.033 [2024-07-15 20:27:37.376876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.033 [2024-07-15 20:27:37.376890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.033 qpair failed and we were unable to recover it. 00:29:12.033 [2024-07-15 20:27:37.376992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.033 [2024-07-15 20:27:37.377006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.033 qpair failed and we were unable to recover it. 00:29:12.033 [2024-07-15 20:27:37.377136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.033 [2024-07-15 20:27:37.377164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.033 qpair failed and we were unable to recover it. 00:29:12.033 [2024-07-15 20:27:37.377295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.033 [2024-07-15 20:27:37.377326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.033 qpair failed and we were unable to recover it. 00:29:12.033 [2024-07-15 20:27:37.377480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.033 [2024-07-15 20:27:37.377509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.033 qpair failed and we were unable to recover it. 00:29:12.033 [2024-07-15 20:27:37.377703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.033 [2024-07-15 20:27:37.377733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.033 qpair failed and we were unable to recover it. 00:29:12.033 [2024-07-15 20:27:37.377942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.033 [2024-07-15 20:27:37.377971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.033 qpair failed and we were unable to recover it. 00:29:12.033 [2024-07-15 20:27:37.378096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.033 [2024-07-15 20:27:37.378125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.033 qpair failed and we were unable to recover it. 
00:29:12.309 [2024-07-15 20:27:37.378247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.378300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.378446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.378475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.378676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.378706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.378859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.378888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.379210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.379239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.379408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.379438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.379593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.379624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.379732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.379751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.379927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.379940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.380117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.380130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 
00:29:12.309 [2024-07-15 20:27:37.380243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.380261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.380355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.380368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.380532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.380546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.380706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.380720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.380848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.380861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.380971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.380984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.381087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.381100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.381266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.381281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.381464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.381478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.381576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.381589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 
00:29:12.309 [2024-07-15 20:27:37.381707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.381720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.381885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.381899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.382004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.382018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.382126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.382139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.382244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.382264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.382447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.382461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.382625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.382639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.382742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.382756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.382882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.382895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.382993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.383006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 
00:29:12.309 [2024-07-15 20:27:37.383197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.383226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.383530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.383577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.383794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.383809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.383939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.383953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.384069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.384083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-07-15 20:27:37.384248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-07-15 20:27:37.384267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.384378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.384393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.384496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.384510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.384600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.384614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.384792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.384807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 
00:29:12.310 [2024-07-15 20:27:37.384976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.384990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.385090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.385104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.385195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.385209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.385317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.385332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.385500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.385514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.385619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.385633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.385736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.385750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.385845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.385870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.386048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.386062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.386152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.386166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 
00:29:12.310 [2024-07-15 20:27:37.386353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.386383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.386645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.386659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.386764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.386794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.386990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.387020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.387150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.387180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.387316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.387348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.387499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.387529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.387654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.387684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.387823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.387853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.387980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.387994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 
00:29:12.310 [2024-07-15 20:27:37.388090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.388103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.388216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.388231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.388357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.388372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.388580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.388594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.390096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.390125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.390337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.390353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.390518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.390532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.390674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.390704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.390923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.390953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.391083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.391113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 
00:29:12.310 [2024-07-15 20:27:37.391265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.391297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.391430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.391460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.391654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.391683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.391888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.391903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.392068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.392098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-07-15 20:27:37.392225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-07-15 20:27:37.392266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.392425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.392456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.392665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.392695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.392842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.392871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.393078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.393108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 
00:29:12.311 [2024-07-15 20:27:37.393237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.393277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.393484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.393513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.393781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.393812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.394018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.394047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.394242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.394282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.394415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.394445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.394782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.394811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.395022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.395058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.395309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.395341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.395538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.395568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 
00:29:12.311 [2024-07-15 20:27:37.395763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.395793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.395929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.395970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.396136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.396150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.396318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.396332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.396517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.396547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.396690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.396720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.396851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.396880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.397009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.397043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.397276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.397290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.397384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.397398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 
00:29:12.311 [2024-07-15 20:27:37.397501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.397515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.397748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.397762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.398016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.398030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.398207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.398222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.398409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.398439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.398579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.398608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.398911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.398941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.399143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.399172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.399396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.399427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.399573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.399603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 
00:29:12.311 [2024-07-15 20:27:37.399855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.399885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.400034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.400064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.400296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.400327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.400468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.400497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.400657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.400688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-07-15 20:27:37.400876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-07-15 20:27:37.400890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.401055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.401068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.401296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.401327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.401537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.401566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.401701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.401715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 
00:29:12.312 [2024-07-15 20:27:37.401813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.401826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.402018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.402047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.402262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.402294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.402566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.402596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.402750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.402764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.402939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.402953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.403191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.403220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.403557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.403596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.403736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.403765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.403970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.403999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 
00:29:12.312 [2024-07-15 20:27:37.404147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.404177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.404337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.404368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.404645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.404674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.404869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.404898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.405010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.405024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.405268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.405298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.405421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.405451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.405597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.405626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.405814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.405827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.406014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.406044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 
00:29:12.312 [2024-07-15 20:27:37.406244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.406283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.406491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.406521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.406765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.406780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.407012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.407025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.407200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.407215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.407318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.407332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.407518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.407548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.407761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.407791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.407985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.408015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.408240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.408263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 
00:29:12.312 [2024-07-15 20:27:37.408420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.408451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.408742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.408771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.408914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.408953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.409054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.409068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.409187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.409213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.409339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.409352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-07-15 20:27:37.409447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-07-15 20:27:37.409458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.409639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.409651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.409763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.409799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.410054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.410092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 
00:29:12.313 [2024-07-15 20:27:37.410243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.410304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.410493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.410504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.410735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.410745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.410858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.410868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.410967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.410978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.411206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.411217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.411310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.411322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.411503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.411518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.411686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.411696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.411881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.411918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 
00:29:12.313 [2024-07-15 20:27:37.412087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.412125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.412294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.412337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.412494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.412528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.412679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.412692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.412795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.412809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.412913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.412927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.413111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.413141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.413286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.413317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.413545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.413575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.413726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.413740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 
00:29:12.313 [2024-07-15 20:27:37.413917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.413948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.414169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.414200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.414355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.414385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.414526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.414557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.414783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.414813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.415027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.415057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.415199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.415229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.415459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.415490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-07-15 20:27:37.415731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-07-15 20:27:37.415760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.415981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.416010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 
00:29:12.314 [2024-07-15 20:27:37.416190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.416220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.416467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.416498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.416701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.416714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.416812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.416826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.417022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.417090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc90d70 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.417412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.417479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.417639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.417672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.417819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.417850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.418058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.418089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.418237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.418282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 
00:29:12.314 [2024-07-15 20:27:37.418442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.418471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.418706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.418735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.418834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.418848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.418979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.418993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.419156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.419169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.419294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.419324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.419527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.419557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.419772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.419811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.419934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.419948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.420124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.420137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 
00:29:12.314 [2024-07-15 20:27:37.420306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.420320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.420489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.420504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.420675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.420717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.420934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.420964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.421162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.421191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.421494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.421525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.421787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.421817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.422010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.422024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.422209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.422239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.422387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.422417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 
00:29:12.314 [2024-07-15 20:27:37.422633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.422661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.422866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.422895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.423101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.423130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.423333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.423364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.423558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.423587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.423712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.423742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.423961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.423989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.424225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.424268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.424492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-07-15 20:27:37.424521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-07-15 20:27:37.424640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.424653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 
00:29:12.315 [2024-07-15 20:27:37.424765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.424779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.425009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.425022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.425134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.425148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.425416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.425430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.425530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.425544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.425658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.425672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.425762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.425775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.425870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.425884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.426055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.426069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.426234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.426270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 
00:29:12.315 [2024-07-15 20:27:37.426474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.426504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.426771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.426800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.427069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.427098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.427252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.427294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.427581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.427610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.427767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.427795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.427925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.427938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.428037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.428053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.428283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.428313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.428459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.428488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 
00:29:12.315 [2024-07-15 20:27:37.428653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.428681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.428817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.428846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.428982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.429011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.429307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.429337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.429573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.429602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.429828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.429842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.430020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.430034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.430133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.430162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.430401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.430432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.430560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.430588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 
00:29:12.315 [2024-07-15 20:27:37.430855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.430884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.431032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.431062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.431277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.431291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.431403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.431417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.431526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.431540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.431633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.431646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.431831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.431860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.432001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.432030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.432164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.432193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-07-15 20:27:37.432406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.432437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 
00:29:12.315 [2024-07-15 20:27:37.432589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-07-15 20:27:37.432620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.432744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.432773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.433030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.433044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.433213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.433242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.433465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.433495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.433760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.433789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.434021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.434050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.434199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.434228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.434531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.434560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.434822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.434851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 
00:29:12.316 [2024-07-15 20:27:37.435012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.435042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.435176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.435189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.435290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.435307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.435516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.435530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.435759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.435773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.436013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.436042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.436309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.436339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.436558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.436592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.436780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.436794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.436928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.436958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 
00:29:12.316 [2024-07-15 20:27:37.437192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.437221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.437520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.437550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.437760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.437790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.437931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.437944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.438105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.438118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.438374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.438412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.438554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.438584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.438721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.438750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.438945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.438973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.439113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.439141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 
00:29:12.316 [2024-07-15 20:27:37.439449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.439480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.439690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.439704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.439880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.439909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.440121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.440150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.440296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.440326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.440530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.440559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.440714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.440743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.440969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.440998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.441285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.441299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.441402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.441416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 
00:29:12.316 [2024-07-15 20:27:37.441523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.441537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-07-15 20:27:37.441667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-07-15 20:27:37.441681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-07-15 20:27:37.441826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-07-15 20:27:37.441855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-07-15 20:27:37.442117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-07-15 20:27:37.442145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-07-15 20:27:37.442422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-07-15 20:27:37.442453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-07-15 20:27:37.442687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-07-15 20:27:37.442716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-07-15 20:27:37.442933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-07-15 20:27:37.442962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-07-15 20:27:37.443152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-07-15 20:27:37.443166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-07-15 20:27:37.443330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-07-15 20:27:37.443344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-07-15 20:27:37.443539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-07-15 20:27:37.443552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 
00:29:12.317 [2024-07-15 20:27:37.443665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.317 [2024-07-15 20:27:37.443679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420
00:29:12.317 qpair failed and we were unable to recover it.
[... the same "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." pair repeats for tqpair=0x7f36fc000b90 through 2024-07-15 20:27:37.450553 ...]
00:29:12.318 [2024-07-15 20:27:37.450897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.318 [2024-07-15 20:27:37.450967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc90d70 with addr=10.0.0.2, port=4420
00:29:12.318 qpair failed and we were unable to recover it.
[... the same pair repeats for tqpair=0xc90d70 through 2024-07-15 20:27:37.452137 ...]
00:29:12.318 [2024-07-15 20:27:37.452340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.318 [2024-07-15 20:27:37.452356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420
00:29:12.318 qpair failed and we were unable to recover it.
[... the same pair repeats for tqpair=0x7f36fc000b90 through 2024-07-15 20:27:37.480645 ...]
00:29:12.321 [2024-07-15 20:27:37.480895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.321 [2024-07-15 20:27:37.480968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc90d70 with addr=10.0.0.2, port=4420
00:29:12.321 qpair failed and we were unable to recover it.
00:29:12.321 [2024-07-15 20:27:37.481217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.321 [2024-07-15 20:27:37.481239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:12.321 qpair failed and we were unable to recover it.
[... the same pair repeats for tqpair=0x7f3704000b90 through 2024-07-15 20:27:37.484001 ...]
00:29:12.322 [2024-07-15 20:27:37.484157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.322 [2024-07-15 20:27:37.484190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420
00:29:12.322 qpair failed and we were unable to recover it.
[... the same pair repeats for tqpair=0x7f36fc000b90 through 2024-07-15 20:27:37.486890, each attempt failing with errno = 111 against 10.0.0.2:4420 ...]
00:29:12.322 [2024-07-15 20:27:37.487130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-07-15 20:27:37.487160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-07-15 20:27:37.487338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-07-15 20:27:37.487370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-07-15 20:27:37.487507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-07-15 20:27:37.487537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-07-15 20:27:37.487819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-07-15 20:27:37.487848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-07-15 20:27:37.487987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-07-15 20:27:37.488016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-07-15 20:27:37.488280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-07-15 20:27:37.488310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-07-15 20:27:37.488501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-07-15 20:27:37.488531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-07-15 20:27:37.488742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-07-15 20:27:37.488771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-07-15 20:27:37.488905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-07-15 20:27:37.488919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-07-15 20:27:37.489227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-07-15 20:27:37.489263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 
00:29:12.322 [2024-07-15 20:27:37.489399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-07-15 20:27:37.489428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-07-15 20:27:37.489682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-07-15 20:27:37.489751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc90d70 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-07-15 20:27:37.489981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-07-15 20:27:37.490014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc90d70 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-07-15 20:27:37.490201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.490245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.490486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.490517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.490716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.490745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.491031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.491060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.491278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.491309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.491526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.491539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.491698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.491711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 
00:29:12.323 [2024-07-15 20:27:37.491872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.491886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.492003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.492016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.492195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.492224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.492445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.492476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.492699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.492733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.492944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.492974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.493293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.493323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.493481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.493510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.493704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.493733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.493922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.493935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 
00:29:12.323 [2024-07-15 20:27:37.494081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.494110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.494235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.494272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.494504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.494533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.494796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.494825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.494958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.494987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.495131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.495160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.495366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.495396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.495605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.495634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.495868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.495898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.496039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.496068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 
00:29:12.323 [2024-07-15 20:27:37.496208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.496221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.496328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.496343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.496526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.496555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.496843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.496872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.497007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.497044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.497301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.497341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.497538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.497567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.497715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.497744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.497938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.497967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.498171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.498200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 
00:29:12.323 [2024-07-15 20:27:37.498418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.498432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.498628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.498663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.498884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.498900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.498994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-07-15 20:27:37.499008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-07-15 20:27:37.499091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.499105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.499333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.499349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.499528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.499542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.499647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.499676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.499966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.499996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.500215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.500245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 
00:29:12.324 [2024-07-15 20:27:37.500437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.500467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.500683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.500713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.500833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.500847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.501019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.501033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.501229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.501243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.501435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.501467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.501735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.501764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.501894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.501908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.502027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.502041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.502216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.502246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 
00:29:12.324 [2024-07-15 20:27:37.502485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.502516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.502735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.502765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.502952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.502981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.503138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.503169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.503315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.503347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.503564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.503594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.503744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.503773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.503917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.503931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.504129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.504159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.504303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.504334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 
00:29:12.324 [2024-07-15 20:27:37.504572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.504602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.504733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.504763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.504931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.504945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.505113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.505151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.505352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.505383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.505533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.505564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.505705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.505735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.505945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.505975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.506112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.506141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.506289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.506304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 
00:29:12.324 [2024-07-15 20:27:37.506468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.506482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.506642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.506659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.506757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.506772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.507056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.507085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.507232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-07-15 20:27:37.507274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-07-15 20:27:37.507413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.507442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.507645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.507674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.507807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.507836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.508102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.508133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.508341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.508356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 
00:29:12.325 [2024-07-15 20:27:37.508447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.508461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.508628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.508658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.508871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.508901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.509116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.509145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.509339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.509353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.509561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.509590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.509819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.509849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.510060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.510089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.510323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.510353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.510531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.510560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 
00:29:12.325 [2024-07-15 20:27:37.510828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.510858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.511001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.511015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.511179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.511193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.511413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.511444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.511653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.511682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.511865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.511907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.512026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.512041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.512240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.512281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.512452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.512482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.512621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.512651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 
00:29:12.325 [2024-07-15 20:27:37.512774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.512803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.513009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.513038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.513186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.513215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.513346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.513360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.513533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.513571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.513836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.513866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.513999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.514028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.514248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.514287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.514495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.514525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 00:29:12.325 [2024-07-15 20:27:37.514722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.325 [2024-07-15 20:27:37.514751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.325 qpair failed and we were unable to recover it. 
00:29:12.325 [2024-07-15 20:27:37.514887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:12.325 [2024-07-15 20:27:37.514916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 
00:29:12.325 qpair failed and we were unable to recover it. 
00:29:12.325 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence repeats for every reconnect attempt against tqpair=0x7f370c000b90 (addr=10.0.0.2, port=4420) between 20:27:37.514887 and 20:27:37.557652 ...]
00:29:12.331 [2024-07-15 20:27:37.557637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:12.331 [2024-07-15 20:27:37.557652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 
00:29:12.331 qpair failed and we were unable to recover it. 
00:29:12.331 [2024-07-15 20:27:37.557812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.557825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.557948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.557978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.558095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.558124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.558267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.558298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.558497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.558511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.558701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.558731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.558911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.558939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.559068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.559098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.559302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.559332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.559548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.559577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 
00:29:12.331 [2024-07-15 20:27:37.559702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.559731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.559936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.559966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.560098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.560128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.560222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.560241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.560461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.560476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.560593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.560606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.560767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.560781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.560897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.560926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.561128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.561158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.561297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.561329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 
00:29:12.331 [2024-07-15 20:27:37.561486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.561500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.561595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.561608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.561718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.561733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.561913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.561956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.562173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.562202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.562351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.562381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.562612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.331 [2024-07-15 20:27:37.562642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.331 qpair failed and we were unable to recover it. 00:29:12.331 [2024-07-15 20:27:37.562782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.562811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.563019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.563049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.563175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.563188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 
00:29:12.332 [2024-07-15 20:27:37.563297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.563313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.563502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.563519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.563626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.563656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.563951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.563981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.564127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.564156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.564350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.564364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.564465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.564479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.564737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.564780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.564994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.565023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.565243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.565284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 
00:29:12.332 [2024-07-15 20:27:37.565488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.565503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.565630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.565659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.565789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.565819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.565962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.565992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.566277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.566309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.566537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.566567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.566764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.566793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.566914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.566943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.567099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.567128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.567271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.567302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 
00:29:12.332 [2024-07-15 20:27:37.567460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.567474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.567635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.567674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.567832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.567862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.568019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.568049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.568172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.568202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.568417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.568441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.568685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.568699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.568831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.568869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.568969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.568983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.569100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.569116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 
00:29:12.332 [2024-07-15 20:27:37.569227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.569241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.569339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.569361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.569524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.569536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.569663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.569701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.569972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.570010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.570177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.570214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.570378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.570389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.570595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.332 [2024-07-15 20:27:37.570606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.332 qpair failed and we were unable to recover it. 00:29:12.332 [2024-07-15 20:27:37.570696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.570707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.570831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.570841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 
00:29:12.333 [2024-07-15 20:27:37.571016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.571026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.571129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.571143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.571237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.571248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.571357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.571368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.571549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.571560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.571648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.571658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.571895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.571932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.572156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.572192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.572389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.572401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.572512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.572523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 
00:29:12.333 [2024-07-15 20:27:37.572617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.572627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.572768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.572779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.572950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.572988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.573213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.573250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.573436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.573473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.573629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.573663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.573794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.573824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.574042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.574071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.574291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.574322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.574489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.574503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 
00:29:12.333 [2024-07-15 20:27:37.574665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.574694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.574830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.574859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.575058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.575086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.575319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.575349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.575545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.575575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.575809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.575839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.576033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.576063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.576200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.576229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.576381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.576411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.576550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.576564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 
00:29:12.333 [2024-07-15 20:27:37.576660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.576673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.576775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.576790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.576955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.576992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.577127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.577156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.577296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.577327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.577454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.577483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.577663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.577676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.577784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.577798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.333 qpair failed and we were unable to recover it. 00:29:12.333 [2024-07-15 20:27:37.577889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.333 [2024-07-15 20:27:37.577902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 00:29:12.334 [2024-07-15 20:27:37.578044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.578059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 
00:29:12.334 [2024-07-15 20:27:37.578296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.578326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 00:29:12.334 [2024-07-15 20:27:37.578552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.578587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 00:29:12.334 [2024-07-15 20:27:37.578796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.578826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 00:29:12.334 [2024-07-15 20:27:37.578970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.578999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 00:29:12.334 [2024-07-15 20:27:37.579224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.579252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 00:29:12.334 [2024-07-15 20:27:37.579403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.579432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 00:29:12.334 [2024-07-15 20:27:37.579624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.579639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 00:29:12.334 [2024-07-15 20:27:37.579833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.579862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 00:29:12.334 [2024-07-15 20:27:37.580006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.580034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 00:29:12.334 [2024-07-15 20:27:37.580200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.580228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 
00:29:12.334 [2024-07-15 20:27:37.580438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.580452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 00:29:12.334 [2024-07-15 20:27:37.580613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.580627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 00:29:12.334 [2024-07-15 20:27:37.580829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.580842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 00:29:12.334 [2024-07-15 20:27:37.581012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.581040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 00:29:12.334 [2024-07-15 20:27:37.581235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.581271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 00:29:12.334 [2024-07-15 20:27:37.581564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.581594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 00:29:12.334 [2024-07-15 20:27:37.581719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.581748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 00:29:12.334 [2024-07-15 20:27:37.581962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.581991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 00:29:12.334 [2024-07-15 20:27:37.582133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.582161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 00:29:12.334 [2024-07-15 20:27:37.582455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.334 [2024-07-15 20:27:37.582470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.334 qpair failed and we were unable to recover it. 
00:29:12.334 [2024-07-15 20:27:37.582640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.334 [2024-07-15 20:27:37.582657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420
00:29:12.334 qpair failed and we were unable to recover it.
00:29:12.334 [... the same three-line sequence — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every reconnect attempt logged between 2024-07-15 20:27:37.582 and 20:27:37.622 ...]
00:29:12.339 [2024-07-15 20:27:37.622735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.339 [2024-07-15 20:27:37.622749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420
00:29:12.339 qpair failed and we were unable to recover it.
00:29:12.339 [2024-07-15 20:27:37.622850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.622863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.622958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.622971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.623137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.623152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.623314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.623328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.623490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.623504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.623593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.623607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.623723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.623737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.623909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.623923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.624024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.624037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.624146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.624160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 
00:29:12.340 [2024-07-15 20:27:37.624260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.624273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.624504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.624521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.624622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.624635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.624841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.624855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.625027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.625041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.625151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.625165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.625336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.625351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.625525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.625540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.625703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.625717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.625978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.625992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 
00:29:12.340 [2024-07-15 20:27:37.626151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.626164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.626266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.626279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.626400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.626413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.626646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.626660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.626820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.626833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.626948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.626963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.627195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.627209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.627315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.627328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.627451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.627466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.627639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.627653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 
00:29:12.340 [2024-07-15 20:27:37.627911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.627925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.628094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.628108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.628269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.628284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.628448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.628462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.628563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.628576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.628680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.628694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.628938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.628952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.629056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.629070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.629197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.629213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.340 [2024-07-15 20:27:37.629318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.629332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 
00:29:12.340 [2024-07-15 20:27:37.629523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.340 [2024-07-15 20:27:37.629537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.340 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.629628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.629641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.629827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.629842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.630010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.630023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.630127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.630140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.630248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.630267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.630436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.630451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.630569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.630584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.630777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.630791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.630903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.630916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 
00:29:12.341 [2024-07-15 20:27:37.631007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.631020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.631117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.631131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.631237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.631251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.631370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.631385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.631644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.631658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.631874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.631888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.632070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.632083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.632199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.632212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.632411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.632427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.632598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.632612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 
00:29:12.341 [2024-07-15 20:27:37.632864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.632879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.633059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.633073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.633190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.633205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.633365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.633379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.633500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.633513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.633699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.633714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.633873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.633903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.634075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.634088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.634210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.634224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.634324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.634337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 
00:29:12.341 [2024-07-15 20:27:37.634498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.634512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.634682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.634696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.634874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.634888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.634985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.634999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.635177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.635207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.635354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.635383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.635516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.635545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.635684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.635713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.635915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.635953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.636090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.636118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 
00:29:12.341 [2024-07-15 20:27:37.636269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.636300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.341 [2024-07-15 20:27:37.636523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.341 [2024-07-15 20:27:37.636553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.341 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.636685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.636699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.636873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.636888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.637061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.637075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.637190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.637203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.637315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.637329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.637490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.637503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.637624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.637638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.637747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.637760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 
00:29:12.342 [2024-07-15 20:27:37.637931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.637944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.638046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.638061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.638242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.638261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.638382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.638397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.638553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.638567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.638741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.638755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.638864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.638879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.638969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.638981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.639181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.639196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.639314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.639329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 
00:29:12.342 [2024-07-15 20:27:37.639509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.639522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.639620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.639635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.639829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.639842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.639945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.639958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.640252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.640272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.640449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.640463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.640579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.640593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.640826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.640840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.342 [2024-07-15 20:27:37.641069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.342 [2024-07-15 20:27:37.641083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.342 qpair failed and we were unable to recover it. 00:29:12.626 [2024-07-15 20:27:37.641263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-07-15 20:27:37.641286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 
00:29:12.626 [2024-07-15 20:27:37.641384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-07-15 20:27:37.641394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-07-15 20:27:37.641520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-07-15 20:27:37.641531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-07-15 20:27:37.641712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-07-15 20:27:37.641721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-07-15 20:27:37.641951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-07-15 20:27:37.641960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-07-15 20:27:37.642048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-07-15 20:27:37.642057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-07-15 20:27:37.642150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-07-15 20:27:37.642158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-07-15 20:27:37.642271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-07-15 20:27:37.642281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-07-15 20:27:37.642436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-07-15 20:27:37.642445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-07-15 20:27:37.642553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-07-15 20:27:37.642567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-07-15 20:27:37.642721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-07-15 20:27:37.642730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 
00:29:12.626 [2024-07-15 20:27:37.642950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-07-15 20:27:37.642959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-07-15 20:27:37.643051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-07-15 20:27:37.643059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-07-15 20:27:37.643207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-07-15 20:27:37.643216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-07-15 20:27:37.643379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-07-15 20:27:37.643389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-07-15 20:27:37.643655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-07-15 20:27:37.643664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-07-15 20:27:37.643765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-07-15 20:27:37.643774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-07-15 20:27:37.643859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-07-15 20:27:37.643867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-07-15 20:27:37.644018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-07-15 20:27:37.644027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-07-15 20:27:37.644177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-07-15 20:27:37.644185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-07-15 20:27:37.644277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-07-15 20:27:37.644303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 
00:29:12.627 [2024-07-15 20:27:37.644391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-07-15 20:27:37.644400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-07-15 20:27:37.644524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-07-15 20:27:37.644534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-07-15 20:27:37.644693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-07-15 20:27:37.644702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-07-15 20:27:37.644865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-07-15 20:27:37.644875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-07-15 20:27:37.645056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-07-15 20:27:37.645066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-07-15 20:27:37.645219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-07-15 20:27:37.645228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-07-15 20:27:37.645406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-07-15 20:27:37.645416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-07-15 20:27:37.645496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-07-15 20:27:37.645504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-07-15 20:27:37.645673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-07-15 20:27:37.645682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-07-15 20:27:37.645774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-07-15 20:27:37.645782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 
00:29:12.632 [2024-07-15 20:27:37.678020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.678029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.678184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.678194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.678309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.678318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.678539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.678547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.678766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.678775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.678938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.678947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.679121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.679150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.679283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.679314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.679445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.679474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.679618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.679648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 
00:29:12.632 [2024-07-15 20:27:37.679820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.679829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.680102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.680111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.680355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.680365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.680529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.680538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.680761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.680790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.681068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.681097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.681366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.681403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.681622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.681631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.681782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.681792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.681954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.681962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 
00:29:12.632 [2024-07-15 20:27:37.682217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.682226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.682418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.682427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.682645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.682655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.682842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.632 [2024-07-15 20:27:37.682851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.632 qpair failed and we were unable to recover it. 00:29:12.632 [2024-07-15 20:27:37.683003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.683012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.683178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.683187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.683357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.683367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.683530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.683539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.683793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.683801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.683993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.684002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 
00:29:12.633 [2024-07-15 20:27:37.684119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.684128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.684287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.684297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.684472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.684480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.684618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.684627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.684900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.684909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.685151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.685160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.685265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.685274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.685505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.685514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.685675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.685684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.685786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.685794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 
00:29:12.633 [2024-07-15 20:27:37.685902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.685912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.686065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.686074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.686404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.686435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.686657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.686689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.686798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.686807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.686955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.686964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.687160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.687170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.687252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.687274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.687361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.687370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.687535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.687544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 
00:29:12.633 [2024-07-15 20:27:37.687709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.687719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.687875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.687884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.688032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.688041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.688138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.688147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.688250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.688263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.688482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.688491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.688710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.688719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.688815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.688824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.688911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.688922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.689073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.689082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 
00:29:12.633 [2024-07-15 20:27:37.689329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.689339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.689537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.689546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.689705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.689714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.689806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.689814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.690064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.633 [2024-07-15 20:27:37.690094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.633 qpair failed and we were unable to recover it. 00:29:12.633 [2024-07-15 20:27:37.690360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.690391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.690585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.690594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.690697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.690705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.690878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.690887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.691061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.691071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 
00:29:12.634 [2024-07-15 20:27:37.691291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.691300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.691468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.691476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.691644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.691653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.691843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.691852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.691953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.691961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.692063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.692071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.692235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.692244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.692413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.692423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.692604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.692613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.692857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.692866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 
00:29:12.634 [2024-07-15 20:27:37.692981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.692990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.693096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.693106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.693328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.693337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.693587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.693617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.693856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.693885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.694098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.694133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.694458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.694489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.694770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.694779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.694936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.694945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.695061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.695070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 
00:29:12.634 [2024-07-15 20:27:37.695256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.695266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.695452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.695482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.695778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.695807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.696028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.696058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.696285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.696294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.696402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.696411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.696508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.696518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.696679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.696688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.696852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.696861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.697081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.697090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 
00:29:12.634 [2024-07-15 20:27:37.697341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.697350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.697499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.697508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.697660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.697669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.697837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.697846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.697991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.634 [2024-07-15 20:27:37.698000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.634 qpair failed and we were unable to recover it. 00:29:12.634 [2024-07-15 20:27:37.698080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.698089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.698174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.698182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.698451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.698461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.698620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.698629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.698873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.698882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 
00:29:12.635 [2024-07-15 20:27:37.698976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.698984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.699141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.699150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.699313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.699323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.699493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.699502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.699596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.699604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.699748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.699757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.699870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.699880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.700038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.700047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.700219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.700228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.700453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.700483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 
00:29:12.635 [2024-07-15 20:27:37.700621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.700651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.700844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.700873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.701076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.701085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.701248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.701261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.701407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.701416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.701496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.701506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.701747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.701756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.701926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.701935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.702016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.702024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.702172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.702181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 
00:29:12.635 [2024-07-15 20:27:37.702371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.702380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.702470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.702478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.702563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.702571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.702720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.702729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.702791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.702800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.635 [2024-07-15 20:27:37.702966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.635 [2024-07-15 20:27:37.702974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.635 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.703139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.703148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.703413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.703444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.703711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.703741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.703992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.704021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 
00:29:12.636 [2024-07-15 20:27:37.704303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.704333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.704543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.704573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.704807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.704816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.704907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.704916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.705133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.705142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.705364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.705373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.705592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.705601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.705754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.705763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.705948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.705957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.706116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.706125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 
00:29:12.636 [2024-07-15 20:27:37.706272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.706282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.706375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.706383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.706627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.706637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.706884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.706893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.706974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.706983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.707106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.707114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.707333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.707343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.707507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.707516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.707760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.707769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.708017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.708026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 
00:29:12.636 [2024-07-15 20:27:37.708117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.708125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.708227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.708235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.708396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.708406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.708687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.708696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.708852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.708861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.709036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.709047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.709200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.709210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.709302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.709311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.709492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.709501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.709695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.709704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 
00:29:12.636 [2024-07-15 20:27:37.709930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.709959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.710162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.710192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.710487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.710518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.710809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.710839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.710984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.711014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.636 qpair failed and we were unable to recover it. 00:29:12.636 [2024-07-15 20:27:37.711158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.636 [2024-07-15 20:27:37.711188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.711336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.711367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.711563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.711592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.711875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.711884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.712002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.712011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 
00:29:12.637 [2024-07-15 20:27:37.712171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.712180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.712398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.712407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.712602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.712611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.712726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.712735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.712982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.712991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.713144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.713152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.713316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.713325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.713581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.713590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.713814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.713844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.714003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.714032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 
00:29:12.637 [2024-07-15 20:27:37.714184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.714212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.714356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.714386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.714590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.714621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.714828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.714837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.715004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.715013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.715232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.715241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.715412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.715421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.715612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.715620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.715782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.715790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.715941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.715950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 
00:29:12.637 [2024-07-15 20:27:37.716128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.716137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.716221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.716229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.716396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.716406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.716678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.716687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.716788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.716796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.716896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.716907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.717074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.717083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.717174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.717182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.717353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.717362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.717459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.717468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 
00:29:12.637 [2024-07-15 20:27:37.717562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.717571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.717663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.717672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.717823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.637 [2024-07-15 20:27:37.717832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.637 qpair failed and we were unable to recover it. 00:29:12.637 [2024-07-15 20:27:37.717915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.717924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.718159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.718168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.718331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.718340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.718586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.718595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.718833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.718842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.718961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.718970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.719051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.719060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 
00:29:12.638 [2024-07-15 20:27:37.719215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.719225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.719376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.719385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.719548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.719557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.719721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.719731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.719890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.719899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.720084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.720093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.720259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.720269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.720416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.720425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.720665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.720675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.720772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.720781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 
00:29:12.638 [2024-07-15 20:27:37.720948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.720956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.721056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.721065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.721241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.721251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.721475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.721484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.721652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.721660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.721913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.721923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.722111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.722120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.722279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.722289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.722384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.722393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.722546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.722556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 
00:29:12.638 [2024-07-15 20:27:37.722723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.722732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.722919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.722928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.723174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.723183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.723270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.723279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.723442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.723451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.723631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.723642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.723745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.723754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.723922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.723931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.724083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.724092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.724353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.724363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 
00:29:12.638 [2024-07-15 20:27:37.724458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.724469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.724574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.724583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.724733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.724742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.724910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.638 [2024-07-15 20:27:37.724919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.638 qpair failed and we were unable to recover it. 00:29:12.638 [2024-07-15 20:27:37.725068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.725077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.639 [2024-07-15 20:27:37.725227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.725237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.639 [2024-07-15 20:27:37.725485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.725494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.639 [2024-07-15 20:27:37.725599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.725608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.639 [2024-07-15 20:27:37.725820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.725829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.639 [2024-07-15 20:27:37.725994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.726004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 
00:29:12.639 [2024-07-15 20:27:37.726174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.726183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.639 [2024-07-15 20:27:37.726360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.726369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.639 [2024-07-15 20:27:37.726541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.726550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.639 [2024-07-15 20:27:37.726711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.726719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.639 [2024-07-15 20:27:37.726873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.726882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.639 [2024-07-15 20:27:37.727044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.727054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.639 [2024-07-15 20:27:37.727215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.727224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.639 [2024-07-15 20:27:37.727559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.727590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.639 [2024-07-15 20:27:37.727716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.727746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.639 [2024-07-15 20:27:37.727878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.727887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 
00:29:12.639 [2024-07-15 20:27:37.728031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.728040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.639 [2024-07-15 20:27:37.728229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.728238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.639 [2024-07-15 20:27:37.728411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.728420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.639 [2024-07-15 20:27:37.728593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.728603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.639 [2024-07-15 20:27:37.728681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.728689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.639 [2024-07-15 20:27:37.728838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.728847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.639 [2024-07-15 20:27:37.729021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.639 [2024-07-15 20:27:37.729030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.639 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.729177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.729186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.729444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.729454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.729681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.729690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 
00:29:12.640 [2024-07-15 20:27:37.729836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.729845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.729955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.729964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.730066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.730076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.730246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.730264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.730416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.730425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.730569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.730581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.730723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.730731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.730899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.730908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.731072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.731081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.731301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.731310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 
00:29:12.640 [2024-07-15 20:27:37.731423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.731432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.731533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.731542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.731691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.731700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.731795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.731804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.731990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.731999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.732168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.732177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.732283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.732292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.732513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.732522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.732684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.732693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.732877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.732885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 
00:29:12.640 [2024-07-15 20:27:37.733138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.733147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.733314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.733323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.733471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.733481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.733630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.733640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.733812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.733822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.733925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.733935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.734155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.734164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.734330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.734340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.734582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.734591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.734743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.734752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 
00:29:12.640 [2024-07-15 20:27:37.735002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.735011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.735160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.735169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.735451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.735461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.735630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.735640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.735803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.735812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.736025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.736034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.736294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.736303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.640 qpair failed and we were unable to recover it. 00:29:12.640 [2024-07-15 20:27:37.736404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.640 [2024-07-15 20:27:37.736415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.736504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.736512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.736675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.736684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 
00:29:12.641 [2024-07-15 20:27:37.736921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.736930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.737080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.737089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.737271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.737280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.737500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.737508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.737670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.737680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.737925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.737936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.738098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.738107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.738340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.738350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.738503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.738511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.738604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.738612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 
00:29:12.641 [2024-07-15 20:27:37.738851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.738861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.738960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.738969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.739197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.739206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.739368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.739377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.739543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.739552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.739665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.739674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.739834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.739843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.740092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.740101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.740347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.740357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.740478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.740487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 
00:29:12.641 [2024-07-15 20:27:37.740730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.740739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.740838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.740847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.741043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.741052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.741224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.741233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.741337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.741346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.741431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.741439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.741591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.741601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.741686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.741694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.741810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.741819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.741995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.742003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 
00:29:12.641 [2024-07-15 20:27:37.742112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.742121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.742283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.742293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.742385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.742393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.742475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.742483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.641 [2024-07-15 20:27:37.742681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.641 [2024-07-15 20:27:37.742690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.641 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.742839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.742848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.743085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.743095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.743269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.743278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.743371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.743380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.743547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.743556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 
00:29:12.642 [2024-07-15 20:27:37.743735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.743765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.744010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.744040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.744196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.744225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.744444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.744475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.744609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.744639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.744751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.744785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.745005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.745015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.745176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.745185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.745402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.745411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.745588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.745628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 
00:29:12.642 [2024-07-15 20:27:37.745826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.745854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.746141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.746171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.746481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.746512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.746792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.746802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.747037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.747066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.747281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.747311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.747574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.747604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.747809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.747817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.747979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.747988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.748158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.748167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 
00:29:12.642 [2024-07-15 20:27:37.748389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.748399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.748563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.748572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.748815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.748824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.748906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.748915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.749064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.749073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.749179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.749188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.749433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.749443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.749719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.749727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.749840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.749849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.750082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.750091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 
00:29:12.642 [2024-07-15 20:27:37.750173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.750181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.750372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.750381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.750589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.750598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.750751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.750760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.751027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.751056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.642 [2024-07-15 20:27:37.751273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.642 [2024-07-15 20:27:37.751303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.642 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.751449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.751472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.751690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.751700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.751800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.751809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.751958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.751966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 
00:29:12.643 [2024-07-15 20:27:37.752045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.752053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.752207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.752216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.752304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.752312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.752568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.752578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.752741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.752749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.752970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.752981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.753144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.753153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.753293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.753303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.753463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.753472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.753745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.753754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 
00:29:12.643 [2024-07-15 20:27:37.753983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.753991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.754156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.754165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.754410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.754420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.754622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.754631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.754723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.754732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.754884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.754893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.755043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.755052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.755220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.755229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.755422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.755432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.755594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.755603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 
00:29:12.643 [2024-07-15 20:27:37.755772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.755781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.756021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.756029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.756125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.756133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.756300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.756310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.756408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.756417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.756609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.756618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.756774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.756783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.757004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.757013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.757094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.757102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.757204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.757212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 
00:29:12.643 [2024-07-15 20:27:37.757388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.757397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.757566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.757575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.757712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.757722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.757921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.757930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.758016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.758025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.758194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.758204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.643 [2024-07-15 20:27:37.758445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.643 [2024-07-15 20:27:37.758476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.643 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.758721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.758751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.759009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.759018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.759184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.759192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 
00:29:12.644 [2024-07-15 20:27:37.759406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.759415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.759606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.759616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.759833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.759842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.759924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.759932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.760026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.760035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.760197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.760207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.760317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.760325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.760484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.760492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.760722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.760751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.760955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.760983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 
00:29:12.644 [2024-07-15 20:27:37.761126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.761154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.761383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.761413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.761709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.761737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.761941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.761950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.762202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.762231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.762394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.762424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.762557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.762587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.762742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.762751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.762935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.762944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.763207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.763216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 
00:29:12.644 [2024-07-15 20:27:37.763385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.763395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.763544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.763553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.763825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.763835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.764109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.764118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.764281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.764290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.764396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.764405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.764650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.764659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.764830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.764839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.765009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.765018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 00:29:12.644 [2024-07-15 20:27:37.765168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.644 [2024-07-15 20:27:37.765177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.644 qpair failed and we were unable to recover it. 
00:29:12.644-00:29:12.650 [2024-07-15 20:27:37.765276 - 2024-07-15 20:27:37.805163] The same three-line failure record repeats for roughly 210 consecutive connection attempts in this window:
  posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
  nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
  qpair failed and we were unable to recover it.
Eight of these records, between 20:27:37.778852 and 20:27:37.780078, report tqpair=0x7f36fc000b90 instead of 0x7f3704000b90. Every attempt targets addr=10.0.0.2, port=4420 and ends with the same unrecoverable-qpair message.
00:29:12.650 [2024-07-15 20:27:37.805425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.805456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.805613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.805642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.805872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.805901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.806166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.806196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.806434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.806465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.806696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.806725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.806948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.806978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.807267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.807276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.807454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.807463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.807639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.807648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 
00:29:12.650 [2024-07-15 20:27:37.807762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.807792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.807994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.808023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.808284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.808315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.808585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.808615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.808820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.808849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.808987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.809026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.809259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.809268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.809433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.809442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.809714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.809723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.809873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.809883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 
00:29:12.650 [2024-07-15 20:27:37.809964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.809972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.810241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.810250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.810442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.810451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.810610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.810619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.810733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.810742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.810981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.810990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.811210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.811218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.811374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.811383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.650 [2024-07-15 20:27:37.811545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.650 [2024-07-15 20:27:37.811554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.650 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.811821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.811830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 
00:29:12.651 [2024-07-15 20:27:37.811993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.812002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.812245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.812256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.812362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.812372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.812539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.812548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.812702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.812711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.812904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.812913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.813091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.813099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.813347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.813356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.813628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.813637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.813860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.813868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 
00:29:12.651 [2024-07-15 20:27:37.814040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.814049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.814212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.814221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.814424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9eea0 is same with the state(5) to be set 00:29:12.651 [2024-07-15 20:27:37.814903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.814970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.815199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.815215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.815377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.815393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.815598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.815608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.815804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.815813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.815969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.815978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.816166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.816197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 
00:29:12.651 [2024-07-15 20:27:37.816411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.816441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.816733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.816762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.816998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.817007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.817089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.817097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.817251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.817262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.817376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.817385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.817473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.817481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.817648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.817657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.817848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.817857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.817939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.817949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 
00:29:12.651 [2024-07-15 20:27:37.818116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.818126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.818388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.818397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.818494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.818502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.818611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.818620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.818782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.818791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.818951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.818960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.819038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.819046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.819192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.819201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.819349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.819359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 00:29:12.651 [2024-07-15 20:27:37.819437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.651 [2024-07-15 20:27:37.819445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.651 qpair failed and we were unable to recover it. 
00:29:12.651 [2024-07-15 20:27:37.819611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.819620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.819806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.819815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.819967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.819976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.820196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.820205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.820381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.820391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.820571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.820599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.820816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.820845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.820998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.821026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.821154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.821183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.821391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.821421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 
00:29:12.652 [2024-07-15 20:27:37.821566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.821594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.821858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.821888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.822174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.822204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.822545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.822576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.822861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.822870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.823117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.823126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.823286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.823296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.823548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.823557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.823734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.823743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.823910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.823918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 
00:29:12.652 [2024-07-15 20:27:37.824137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.824146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.824319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.824328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.824569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.824578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.824743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.824751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.824940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.824949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.825115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.825124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.825277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.825286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.825443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.825452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.825607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.825616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.825843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.825878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 
00:29:12.652 [2024-07-15 20:27:37.826014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.826043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.826290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.826320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.826529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.826557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.826683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.826712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.826904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.652 [2024-07-15 20:27:37.826933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.652 qpair failed and we were unable to recover it. 00:29:12.652 [2024-07-15 20:27:37.827116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.827125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.827231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.827240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.827404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.827413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.827604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.827613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.827783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.827791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 
00:29:12.653 [2024-07-15 20:27:37.827971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.828001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.828194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.828223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.828372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.828402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.828558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.828587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.828867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.828897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.829090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.829119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.829372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.829381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.829628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.829637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.829787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.829796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.830026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.830035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 
00:29:12.653 [2024-07-15 20:27:37.830142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.830150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.830251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.830268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.830438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.830447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.830641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.830650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.830733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.830741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.830822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.830830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.830938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.830946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.831097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.831107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.831260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.831269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 00:29:12.653 [2024-07-15 20:27:37.831412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.653 [2024-07-15 20:27:37.831421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.653 qpair failed and we were unable to recover it. 
00:29:12.653 [2024-07-15 20:27:37.831506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.653 [2024-07-15 20:27:37.831514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:12.653 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnect attempt from 20:27:37.831 through 20:27:37.870: posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:29:12.658 [2024-07-15 20:27:37.870701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.658 [2024-07-15 20:27:37.870711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:12.658 qpair failed and we were unable to recover it.
00:29:12.658 [2024-07-15 20:27:37.870813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.658 [2024-07-15 20:27:37.870821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.658 qpair failed and we were unable to recover it. 00:29:12.658 [2024-07-15 20:27:37.870932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.870941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.871130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.871140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.871239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.871247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.871345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.871353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.871448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.871456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.871617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.871626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.871855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.871864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.872085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.872094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.872269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.872278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 
00:29:12.659 [2024-07-15 20:27:37.872460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.872470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.872631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.872641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.872902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.872911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.873114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.873123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.873387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.873396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.873620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.873629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.873723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.873731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.873833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.873841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.874006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.874016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.874126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.874134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 
00:29:12.659 [2024-07-15 20:27:37.874363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.874373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.874538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.874547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.874694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.874703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.874794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.874802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.874968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.874977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.875135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.875144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.875296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.875308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.875419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.875428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.875511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.875520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.875609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.875616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 
00:29:12.659 [2024-07-15 20:27:37.875795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.875805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.875956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.875965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.659 [2024-07-15 20:27:37.876124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.659 [2024-07-15 20:27:37.876133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.659 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.876298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.876308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.876476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.876485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.876571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.876579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.876724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.876733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.876984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.876994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.877189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.877218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.877385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.877415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 
00:29:12.660 [2024-07-15 20:27:37.877637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.877667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.877947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.877956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.878173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.878182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.878347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.878356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.878508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.878517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.878736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.878745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.878825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.878834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.879000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.879009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.879174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.879183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.879286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.879296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 
00:29:12.660 [2024-07-15 20:27:37.879382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.879391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.879473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.879480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.879727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.879737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.879893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.879903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.880120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.880149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.880411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.880456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.880580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.880610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.880870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.880900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.881169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.881198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.881412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.881443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 
00:29:12.660 [2024-07-15 20:27:37.881743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.881752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.881912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.881921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.882160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.882189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.882434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.882465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.882675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.882705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.882910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.882919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.883072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.883119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.883325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.883355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.883563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.883592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.883734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.883743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 
00:29:12.660 [2024-07-15 20:27:37.883851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.883860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.884042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.884052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.884153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.884162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.884349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.884358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.884603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.884612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.884774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.884783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.885000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.885009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.885173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.885182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.885284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.885292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.885384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.885392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 
00:29:12.660 [2024-07-15 20:27:37.885622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.885652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.885919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.885949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.886172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.886202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.886443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.886473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.886677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.886707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.886915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.886945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.887141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.887169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.887435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.887444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.887593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.887601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.887818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.887827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 
00:29:12.660 [2024-07-15 20:27:37.887995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.888004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.888252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.660 [2024-07-15 20:27:37.888290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.660 qpair failed and we were unable to recover it. 00:29:12.660 [2024-07-15 20:27:37.888555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.888585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.888748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.888781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.888930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.888939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.889095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.889104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.889198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.889207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.889428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.889437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.889532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.889540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.889624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.889632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 
00:29:12.661 [2024-07-15 20:27:37.889873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.889883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.889963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.889971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.890183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.890192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.890362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.890372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.890553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.890563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.890659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.890668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.890909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.890920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.891012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.891020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.891208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.891217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.891370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.891379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 
00:29:12.661 [2024-07-15 20:27:37.891550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.891560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.891660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.891669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.891927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.891937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.892051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.892060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.892309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.892318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.892536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.892546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.892792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.892801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.892988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.892997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.893274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.893305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.893466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.893496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 
00:29:12.661 [2024-07-15 20:27:37.893704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.893733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.893965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.893994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.894100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.894109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.894205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.894214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.894433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.894442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.894666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.894675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.894784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.894792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.894910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.894920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.895029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.895037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.895282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.895291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 
00:29:12.661 [2024-07-15 20:27:37.895366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.895375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.895479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.895488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.895636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.895645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.895797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.895807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.896048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.896057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.896149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.896157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.896314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.896324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.896404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.896412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.896498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.896506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 00:29:12.661 [2024-07-15 20:27:37.896660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.661 [2024-07-15 20:27:37.896670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.661 qpair failed and we were unable to recover it. 
00:29:12.662 [2024-07-15 20:27:37.903547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.662 [2024-07-15 20:27:37.903576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:12.662 qpair failed and we were unable to recover it.
00:29:12.662 [2024-07-15 20:27:37.903809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.662 [2024-07-15 20:27:37.903838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:12.662 qpair failed and we were unable to recover it.
00:29:12.662 [2024-07-15 20:27:37.904039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.662 [2024-07-15 20:27:37.904068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:12.662 qpair failed and we were unable to recover it.
00:29:12.662 [2024-07-15 20:27:37.904350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.662 [2024-07-15 20:27:37.904386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420
00:29:12.662 qpair failed and we were unable to recover it.
00:29:12.662 [2024-07-15 20:27:37.904653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.662 [2024-07-15 20:27:37.904669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420
00:29:12.662 qpair failed and we were unable to recover it.
00:29:12.662 [2024-07-15 20:27:37.904868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.662 [2024-07-15 20:27:37.904882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420
00:29:12.662 qpair failed and we were unable to recover it.
00:29:12.662 [2024-07-15 20:27:37.905069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.662 [2024-07-15 20:27:37.905083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420
00:29:12.662 qpair failed and we were unable to recover it.
00:29:12.662 [2024-07-15 20:27:37.905304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.662 [2024-07-15 20:27:37.905319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420
00:29:12.662 qpair failed and we were unable to recover it.
00:29:12.662 [2024-07-15 20:27:37.905520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.662 [2024-07-15 20:27:37.905534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420
00:29:12.662 qpair failed and we were unable to recover it.
00:29:12.662 [2024-07-15 20:27:37.905662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.662 [2024-07-15 20:27:37.905676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420
00:29:12.662 qpair failed and we were unable to recover it.
00:29:12.663 [2024-07-15 20:27:37.910630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.663 [2024-07-15 20:27:37.910660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420
00:29:12.663 qpair failed and we were unable to recover it.
00:29:12.663 [2024-07-15 20:27:37.910878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.663 [2024-07-15 20:27:37.910892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420
00:29:12.663 qpair failed and we were unable to recover it.
00:29:12.663 [2024-07-15 20:27:37.911078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.663 [2024-07-15 20:27:37.911092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420
00:29:12.663 qpair failed and we were unable to recover it.
00:29:12.663 [2024-07-15 20:27:37.911330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.663 [2024-07-15 20:27:37.911364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:12.663 qpair failed and we were unable to recover it.
00:29:12.663 [2024-07-15 20:27:37.911678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.663 [2024-07-15 20:27:37.911708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:12.663 qpair failed and we were unable to recover it.
00:29:12.663 [2024-07-15 20:27:37.911857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.663 [2024-07-15 20:27:37.911886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:12.663 qpair failed and we were unable to recover it.
00:29:12.663 [2024-07-15 20:27:37.912080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.663 [2024-07-15 20:27:37.912110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:12.663 qpair failed and we were unable to recover it.
00:29:12.663 [2024-07-15 20:27:37.912393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.663 [2024-07-15 20:27:37.912403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:12.663 qpair failed and we were unable to recover it.
00:29:12.663 [2024-07-15 20:27:37.912592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.663 [2024-07-15 20:27:37.912601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:12.663 qpair failed and we were unable to recover it.
00:29:12.663 [2024-07-15 20:27:37.912795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.663 [2024-07-15 20:27:37.912804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:12.663 qpair failed and we were unable to recover it.
00:29:12.666 [2024-07-15 20:27:37.939739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.939769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.940078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.940108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.940304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.940335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.940611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.940620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.940787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.940796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.941049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.941079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.941287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.941318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.941608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.941638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.941793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.941822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.942059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.942088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 
00:29:12.666 [2024-07-15 20:27:37.942294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.942325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.942628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.942657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.942867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.942896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.943183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.943214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.943513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.943543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.943822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.943851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.944010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.944039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.944315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.944325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.944463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.944472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.944634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.944643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 
00:29:12.666 [2024-07-15 20:27:37.944829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.944838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.944931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.944939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.945099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.945108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.945386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.945416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.945553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.945582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.945794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.945824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.946059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.946088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.946250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.946284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.946432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.946441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.946525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.946533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 
00:29:12.666 [2024-07-15 20:27:37.946698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.946707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.946805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.946813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.946979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.946989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.947152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.947161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.947262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.947270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.947439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.947448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.947562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.947571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.947720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.947729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.947849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.947858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.948011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.948021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 
00:29:12.666 [2024-07-15 20:27:37.948171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.948180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.948276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.948285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.948479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.948488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.948768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.948803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.949012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.949042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.949333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.949364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.949602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.949611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.949798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.949807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.950061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.950091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.950336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.950367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 
00:29:12.666 [2024-07-15 20:27:37.950566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.950596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.950871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.950900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.951032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.951062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.666 [2024-07-15 20:27:37.951310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.666 [2024-07-15 20:27:37.951319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.666 qpair failed and we were unable to recover it. 00:29:12.949 [2024-07-15 20:27:37.951472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.949 [2024-07-15 20:27:37.951482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.949 qpair failed and we were unable to recover it. 00:29:12.949 [2024-07-15 20:27:37.951599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.949 [2024-07-15 20:27:37.951609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.949 qpair failed and we were unable to recover it. 00:29:12.949 [2024-07-15 20:27:37.951801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.949 [2024-07-15 20:27:37.951810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.949 qpair failed and we were unable to recover it. 00:29:12.949 [2024-07-15 20:27:37.951976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.949 [2024-07-15 20:27:37.951986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.949 qpair failed and we were unable to recover it. 00:29:12.949 [2024-07-15 20:27:37.952150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.949 [2024-07-15 20:27:37.952170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.949 qpair failed and we were unable to recover it. 00:29:12.949 [2024-07-15 20:27:37.952426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.949 [2024-07-15 20:27:37.952457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.949 qpair failed and we were unable to recover it. 
00:29:12.949 [2024-07-15 20:27:37.952667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.949 [2024-07-15 20:27:37.952697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.949 qpair failed and we were unable to recover it. 00:29:12.949 [2024-07-15 20:27:37.952937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.952966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.953243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.953252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.953375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.953384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.953536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.953545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.953761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.953770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.953956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.953965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.954136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.954146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.954328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.954365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.954575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.954605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 
00:29:12.950 [2024-07-15 20:27:37.954831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.954861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.955138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.955146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.955230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.955238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.955459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.955468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.955632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.955641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.955756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.955765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.956010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.956019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.956279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.956289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.956449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.956458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.956661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.956691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 
00:29:12.950 [2024-07-15 20:27:37.957009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.957039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.957250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.957287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.957522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.957552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.957746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.957781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.957956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.957966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.958152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.958161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.958381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.958390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.958628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.958658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.958802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.958832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.959059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.959088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 
00:29:12.950 [2024-07-15 20:27:37.959354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.959385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.959594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.959623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.959905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.959935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.960172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.960201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.960417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.960447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.960714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.960744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.960968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.960998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.961298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.961329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.961618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.961648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.961872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.961901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 
00:29:12.950 [2024-07-15 20:27:37.962213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.950 [2024-07-15 20:27:37.962222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.950 qpair failed and we were unable to recover it. 00:29:12.950 [2024-07-15 20:27:37.962390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.962399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.962500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.962508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.962674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.962684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.962876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.962885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.962958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.962966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.963222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.963241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.963326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.963335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.963508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.963517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.963612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.963620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 
00:29:12.951 [2024-07-15 20:27:37.963811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.963820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.964074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.964104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.964361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.964392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.964629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.964658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.964882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.964912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.965173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.965203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.965472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.965502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.965697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.965727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.965866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.965896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.966102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.966133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 
00:29:12.951 [2024-07-15 20:27:37.966395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.966425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.966723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.966752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.967016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.967045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.967327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.967381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.967552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.967560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.967733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.967762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.967976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.968006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.968299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.968308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.968481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.951 [2024-07-15 20:27:37.968490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.951 qpair failed and we were unable to recover it. 00:29:12.951 [2024-07-15 20:27:37.968636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.968646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 
00:29:12.952 [2024-07-15 20:27:37.968740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.968748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.968926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.968955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.969080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.969109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.969399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.969429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.969578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.969609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.969912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.969941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.970176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.970206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.970411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.970442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.970662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.970691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.970890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.970920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 
00:29:12.952 [2024-07-15 20:27:37.971161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.971190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.971328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.971337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.971450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.971459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.971705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.971714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.971891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.971900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.972058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.972067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.972227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.972236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.972386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.972395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.972591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.972600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.972770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.972779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 
00:29:12.952 [2024-07-15 20:27:37.972958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.972988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.973189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.973219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.973519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.952 [2024-07-15 20:27:37.973550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.952 qpair failed and we were unable to recover it. 00:29:12.952 [2024-07-15 20:27:37.973847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.973878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.974068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.974099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.974278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.974308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.974460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.974490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.974787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.974816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.975083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.975112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.975358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.975390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 
00:29:12.953 [2024-07-15 20:27:37.975525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.975554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.975835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.975865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.976155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.976184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.976442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.976453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.976623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.976632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.976803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.976812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.977097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.977107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.977274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.977299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.977479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.977488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.977668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.977699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 
00:29:12.953 [2024-07-15 20:27:37.977921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.977950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.978108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.978138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.978347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.978357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.978583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.978613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.978906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.978935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.979127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.979136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.979332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.979341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.953 [2024-07-15 20:27:37.979458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.953 [2024-07-15 20:27:37.979468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.953 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.979651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.979660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.979766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.979775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 
00:29:12.954 [2024-07-15 20:27:37.979963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.979972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.980154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.980163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.980265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.980275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.980423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.980432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.980686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.980715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.980981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.981011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.981288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.981319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.981635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.981665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.981927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.981956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.982177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.982206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 
00:29:12.954 [2024-07-15 20:27:37.982443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.982453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.982643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.982652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.982858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.982887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.983086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.983116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.983349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.983379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.983543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.983572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.983778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.983808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.984022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.984051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.984195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.984225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.984363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.984402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 
00:29:12.954 [2024-07-15 20:27:37.984613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.984622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.984806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.984815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.954 [2024-07-15 20:27:37.985059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.954 [2024-07-15 20:27:37.985089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.954 qpair failed and we were unable to recover it. 00:29:12.955 [2024-07-15 20:27:37.985300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.955 [2024-07-15 20:27:37.985337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.955 qpair failed and we were unable to recover it. 00:29:12.955 [2024-07-15 20:27:37.985545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.955 [2024-07-15 20:27:37.985575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.955 qpair failed and we were unable to recover it. 00:29:12.955 [2024-07-15 20:27:37.985868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.955 [2024-07-15 20:27:37.985897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.955 qpair failed and we were unable to recover it. 00:29:12.955 [2024-07-15 20:27:37.986214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.955 [2024-07-15 20:27:37.986224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.955 qpair failed and we were unable to recover it. 00:29:12.955 [2024-07-15 20:27:37.986553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.955 [2024-07-15 20:27:37.986621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc90d70 with addr=10.0.0.2, port=4420 00:29:12.955 qpair failed and we were unable to recover it. 00:29:12.955 [2024-07-15 20:27:37.986852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.955 [2024-07-15 20:27:37.986884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc90d70 with addr=10.0.0.2, port=4420 00:29:12.955 qpair failed and we were unable to recover it. 00:29:12.955 [2024-07-15 20:27:37.987101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.955 [2024-07-15 20:27:37.987131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc90d70 with addr=10.0.0.2, port=4420 00:29:12.955 qpair failed and we were unable to recover it. 
00:29:12.955 [2024-07-15 20:27:37.987316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.955 [2024-07-15 20:27:37.987327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.955 qpair failed and we were unable to recover it. 00:29:12.955 [2024-07-15 20:27:37.987563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.955 [2024-07-15 20:27:37.987572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.955 qpair failed and we were unable to recover it. 00:29:12.955 [2024-07-15 20:27:37.987833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.955 [2024-07-15 20:27:37.987863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.955 qpair failed and we were unable to recover it. 00:29:12.955 [2024-07-15 20:27:37.988016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.955 [2024-07-15 20:27:37.988045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.955 qpair failed and we were unable to recover it. 00:29:12.955 [2024-07-15 20:27:37.988241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.955 [2024-07-15 20:27:37.988278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.955 qpair failed and we were unable to recover it. 00:29:12.955 [2024-07-15 20:27:37.988481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.955 [2024-07-15 20:27:37.988490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.955 qpair failed and we were unable to recover it. 00:29:12.955 [2024-07-15 20:27:37.988730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.955 [2024-07-15 20:27:37.988760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.955 qpair failed and we were unable to recover it. 00:29:12.955 [2024-07-15 20:27:37.988908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.955 [2024-07-15 20:27:37.988938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.955 qpair failed and we were unable to recover it. 00:29:12.955 [2024-07-15 20:27:37.989173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.955 [2024-07-15 20:27:37.989203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.955 qpair failed and we were unable to recover it. 00:29:12.955 [2024-07-15 20:27:37.989416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.955 [2024-07-15 20:27:37.989425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.955 qpair failed and we were unable to recover it. 
00:29:12.955 [2024-07-15 20:27:37.989681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.955 [2024-07-15 20:27:37.989711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.955 qpair failed and we were unable to recover it. 00:29:12.955 [2024-07-15 20:27:37.989855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.955 [2024-07-15 20:27:37.989884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.955 qpair failed and we were unable to recover it. 00:29:12.955 [2024-07-15 20:27:37.990172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.955 [2024-07-15 20:27:37.990202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.955 qpair failed and we were unable to recover it. 00:29:12.956 [2024-07-15 20:27:37.990391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.990401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.956 qpair failed and we were unable to recover it. 00:29:12.956 [2024-07-15 20:27:37.990620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.990629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.956 qpair failed and we were unable to recover it. 00:29:12.956 [2024-07-15 20:27:37.990878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.990887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.956 qpair failed and we were unable to recover it. 00:29:12.956 [2024-07-15 20:27:37.991048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.991057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.956 qpair failed and we were unable to recover it. 00:29:12.956 [2024-07-15 20:27:37.991246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.991258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.956 qpair failed and we were unable to recover it. 00:29:12.956 [2024-07-15 20:27:37.991429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.991438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.956 qpair failed and we were unable to recover it. 00:29:12.956 [2024-07-15 20:27:37.991553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.991562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.956 qpair failed and we were unable to recover it. 
00:29:12.956 [2024-07-15 20:27:37.991786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.991796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.956 qpair failed and we were unable to recover it. 00:29:12.956 [2024-07-15 20:27:37.991892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.991901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.956 qpair failed and we were unable to recover it. 00:29:12.956 [2024-07-15 20:27:37.992125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.992155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.956 qpair failed and we were unable to recover it. 00:29:12.956 [2024-07-15 20:27:37.992295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.992326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.956 qpair failed and we were unable to recover it. 00:29:12.956 [2024-07-15 20:27:37.992525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.992554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.956 qpair failed and we were unable to recover it. 00:29:12.956 [2024-07-15 20:27:37.992789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.992818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.956 qpair failed and we were unable to recover it. 00:29:12.956 [2024-07-15 20:27:37.992962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.992992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.956 qpair failed and we were unable to recover it. 00:29:12.956 [2024-07-15 20:27:37.993222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.993251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.956 qpair failed and we were unable to recover it. 00:29:12.956 [2024-07-15 20:27:37.993516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.993525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.956 qpair failed and we were unable to recover it. 00:29:12.956 [2024-07-15 20:27:37.993754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.993763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.956 qpair failed and we were unable to recover it. 
00:29:12.956 [2024-07-15 20:27:37.993952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.993960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.956 qpair failed and we were unable to recover it. 00:29:12.956 [2024-07-15 20:27:37.994215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.994224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.956 qpair failed and we were unable to recover it. 00:29:12.956 [2024-07-15 20:27:37.994320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.994329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.956 qpair failed and we were unable to recover it. 00:29:12.956 [2024-07-15 20:27:37.994506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.956 [2024-07-15 20:27:37.994517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.994667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.994701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.995011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.995041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.995309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.995341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.995564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.995593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.995720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.995750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.995969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.995998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 
00:29:12.957 [2024-07-15 20:27:37.996231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.996269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.996570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.996579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.996732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.996741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.996966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.996996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.997202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.997232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.997436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.997474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.997578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.997586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.997804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.997813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.997908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.997916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.997996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.998004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 
00:29:12.957 [2024-07-15 20:27:37.998153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.998162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.998331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.998341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.998565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.998595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.998814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.998844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.999053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.999082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.999346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.999377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.999601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.999610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:37.999829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:37.999837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:38.000073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:38.000082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 00:29:12.957 [2024-07-15 20:27:38.000318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.957 [2024-07-15 20:27:38.000328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.957 qpair failed and we were unable to recover it. 
00:29:12.958 [2024-07-15 20:27:38.000417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.000426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.000576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.000585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.000753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.000762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.001022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.001053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.001347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.001385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.001551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.001561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.001806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.001836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.002070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.002100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.002366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.002397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.002691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.002720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 
00:29:12.958 [2024-07-15 20:27:38.002985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.003014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.003222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.003252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.003427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.003457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.003610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.003645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.003856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.003886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.004081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.004110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.004239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.004278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.004403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.004413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.004639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.004648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.004807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.004816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 
00:29:12.958 [2024-07-15 20:27:38.004989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.004998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.005218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.005227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.005392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.005401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.005600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.005630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.005847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.005876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.958 [2024-07-15 20:27:38.006020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.958 [2024-07-15 20:27:38.006050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.958 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.006272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.006282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.006481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.006491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.006659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.006668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.006894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.006924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 
00:29:12.959 [2024-07-15 20:27:38.007191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.007221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.007431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.007461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.007754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.007783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.008080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.008110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.008318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.008348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.008615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.008645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.008856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.008886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.009091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.009120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.009252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.009293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.009584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.009613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 
00:29:12.959 [2024-07-15 20:27:38.009978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.010046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.010285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.010321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.010535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.010550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.010832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.010862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.011134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.011164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.011326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.011370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.011627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.011641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.011847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.011862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.012127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.012141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.959 [2024-07-15 20:27:38.012343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.012353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 
00:29:12.959 [2024-07-15 20:27:38.012581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.959 [2024-07-15 20:27:38.012613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.959 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.012897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.012926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.013230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.013266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.013563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.013598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.013891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.013921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.014163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.014193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.014426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.014458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.014739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.014748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.014861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.014871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.015079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.015088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 
00:29:12.960 [2024-07-15 20:27:38.015278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.015288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.015441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.015450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.015591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.015600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.015759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.015768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.015937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.015946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.016079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.016089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.016261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.016271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.016367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.016375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.016512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.016521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.016690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.016699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 
00:29:12.960 [2024-07-15 20:27:38.016945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.016954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.017050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.017058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.017242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.017251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.017348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.017357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.017470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.017479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.017731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.017741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.017840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.017849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.017955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.017965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.018067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.018077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.018370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.018379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 
00:29:12.960 [2024-07-15 20:27:38.018478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.018486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.018591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.018600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.018693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.018701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.018800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.018809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.018904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.018912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.019011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.019019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.019113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.019122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.960 qpair failed and we were unable to recover it. 00:29:12.960 [2024-07-15 20:27:38.019307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.960 [2024-07-15 20:27:38.019317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.019420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.019428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.019609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.019619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 
00:29:12.961 [2024-07-15 20:27:38.019800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.019809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.019980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.019989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.020203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.020211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.020437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.020448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.020608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.020618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.020862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.020892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.021089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.021119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.021345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.021375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.021513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.021522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.021746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.021776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 
00:29:12.961 [2024-07-15 20:27:38.022040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.022070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.022286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.022317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.022479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.022510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.022802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.022831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.023033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.023062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.023223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.023253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.023538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.023547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.023775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.023784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.023952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.023962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.024126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.024135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 
00:29:12.961 [2024-07-15 20:27:38.024316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.024326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.024534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.024543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.024740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.024749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.024982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.025012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.025223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.025260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.025400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.025430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.025677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.025686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.025772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.025780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.025862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.025870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.026042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.026052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 
00:29:12.961 [2024-07-15 20:27:38.026301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.026311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.961 qpair failed and we were unable to recover it. 00:29:12.961 [2024-07-15 20:27:38.026415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.961 [2024-07-15 20:27:38.026424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.026590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.026600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.026755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.026783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.026986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.027015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.027209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.027239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.027575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.027602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.027817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.027848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.027994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.028023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.028263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.028293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 
00:29:12.962 [2024-07-15 20:27:38.028494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.028523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.028734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.028764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.029000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.029030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.029177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.029211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.029422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.029432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.029601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.029610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.029886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.029916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.030110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.030140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.030363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.030373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.030536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.030545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 
00:29:12.962 [2024-07-15 20:27:38.030632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.030640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.030859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.030868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.031015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.031024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.031126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.031135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.031299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.031308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.031469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.031479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.031701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.031710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.031817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.031826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.031995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.032005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.032121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.032130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 
00:29:12.962 [2024-07-15 20:27:38.032219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.032227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.032329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.032338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.032498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.032508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.032730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.032739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.032886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.032896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.033086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.033094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.033330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.033340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.033435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.033443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.033621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.033630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 00:29:12.962 [2024-07-15 20:27:38.033779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.962 [2024-07-15 20:27:38.033789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.962 qpair failed and we were unable to recover it. 
00:29:12.962 [2024-07-15 20:27:38.033952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.033962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.034130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.034138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.034333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.034365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.034632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.034662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.034856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.034886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.035125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.035155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.035364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.035395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.035543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.035552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.035653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.035661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.035908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.035917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 
00:29:12.963 [2024-07-15 20:27:38.036083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.036092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.036290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.036321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.036616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.036646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.036909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.036940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.037151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.037180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.037316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.037346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.037639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.037668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.037933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.037962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.038176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.038206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.038463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.038495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 
00:29:12.963 [2024-07-15 20:27:38.038708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.038737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.038937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.038967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.039199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.039229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.039459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.039468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.039626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.039636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.039869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.039899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.040206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.040237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.040392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.040423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.040670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.040679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.040904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.040913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 
00:29:12.963 [2024-07-15 20:27:38.041081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.041090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.041251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.041264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.041484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.041493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.041593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.041601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.041829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.041840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.041943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.041953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.042128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.042158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.042352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.042383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.042680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.042710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.042907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.042937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 
00:29:12.963 [2024-07-15 20:27:38.043135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.963 [2024-07-15 20:27:38.043174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.963 qpair failed and we were unable to recover it. 00:29:12.963 [2024-07-15 20:27:38.043340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.043371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.043553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.043584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.043874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.043904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.044198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.044228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.044454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.044489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.044667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.044681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.044942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.044972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.045251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.045293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.045500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.045530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 
00:29:12.964 [2024-07-15 20:27:38.045828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.045842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.046073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.046087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.046274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.046289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.046500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.046530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.046780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.046811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.047016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.047046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.047345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.047360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.047600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.047614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.047729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.047743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.047835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.047848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 
00:29:12.964 [2024-07-15 20:27:38.048048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.048058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.048144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.048153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.048372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.048381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.048530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.048539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.048761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.048770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.048852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.048860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.049009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.049018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.049245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.049258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.049367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.049376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.049538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.049547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 
00:29:12.964 [2024-07-15 20:27:38.049778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.049807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.050036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.050067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.050302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.050332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.050479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.050509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.050722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.964 [2024-07-15 20:27:38.050752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.964 qpair failed and we were unable to recover it. 00:29:12.964 [2024-07-15 20:27:38.050984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.051014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.051222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.051252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.051530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.051559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.051771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.051780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.051942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.051951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 
00:29:12.965 [2024-07-15 20:27:38.052037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.052046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.052245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.052288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.052588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.052618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.052814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.052844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.053158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.053188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.053507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.053538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.053757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.053787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.054000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.054030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.054183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.054192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.054274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.054283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 
00:29:12.965 [2024-07-15 20:27:38.054523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.054532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.054792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.054802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.055019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.055028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.055193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.055223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.055460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.055492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.055712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.055741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.056032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.056062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.056272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.056303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.056574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.056603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.056770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.056779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 
00:29:12.965 [2024-07-15 20:27:38.056931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.056941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.057165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.057195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.057513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.057544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.057735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.057745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.057972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.057981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.058198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.058207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.058367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.058377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.058530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.058539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.058644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.058653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.058812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.058821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 
00:29:12.965 [2024-07-15 20:27:38.058971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.058980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.965 qpair failed and we were unable to recover it. 00:29:12.965 [2024-07-15 20:27:38.059153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.965 [2024-07-15 20:27:38.059163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.059260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.059269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.059371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.059381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.059480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.059490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.059637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.059646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.059815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.059825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.059929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.059938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.060113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.060122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.060218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.060229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 
00:29:12.966 [2024-07-15 20:27:38.060411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.060423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.060575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.060584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.060833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.060862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.061076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.061106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.061270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.061301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.061559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.061568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.061730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.061738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.061928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.061938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.062242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.062295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.062566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.062595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 
00:29:12.966 [2024-07-15 20:27:38.062793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.062823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.063038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.063068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.063278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.063309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.063532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.063562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.063826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.063835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.063916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.063925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.064112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.064122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.064272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.064282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.064472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.064482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.064670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.064699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 
00:29:12.966 [2024-07-15 20:27:38.064908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.064938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.966 [2024-07-15 20:27:38.065165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.966 [2024-07-15 20:27:38.065194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.966 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.065330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.065356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.065602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.065611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.065707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.065716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.065913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.065923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.066019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.066027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.066179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.066189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.066343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.066353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.066461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.066470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 
00:29:12.967 [2024-07-15 20:27:38.066662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.066671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.066821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.066830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.066912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.066920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.067071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.067080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.067159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.067168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.067372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.067382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.067557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.067585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.067791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.067820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.067958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.067988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.068183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.068212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 
00:29:12.967 [2024-07-15 20:27:38.068449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.068485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.068761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.068791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.069003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.069011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.069261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.069270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.069478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.069487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.069732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.069741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.069982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.070012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.070279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.070309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.070567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.070576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.070688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.070697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 
00:29:12.967 [2024-07-15 20:27:38.070785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.070794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.071038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.071047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.071159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.071167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.071302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.071312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.071475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.071484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.071590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.071599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.967 qpair failed and we were unable to recover it. 00:29:12.967 [2024-07-15 20:27:38.071705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.967 [2024-07-15 20:27:38.071714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.071983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.072013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.072279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.072309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.072452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.072475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 
00:29:12.968 [2024-07-15 20:27:38.072626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.072635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.072858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.072867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.072975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.072984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.073180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.073190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.073281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.073290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.073440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.073450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.073617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.073626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.073733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.073743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.073907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.073916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.074013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.074043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 
00:29:12.968 [2024-07-15 20:27:38.074185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.074215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.074456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.074488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.074780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.074789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.074957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.074966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.075049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.075058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.075298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.075308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.075460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.075470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.075571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.075580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.075754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.075764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.075940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.075970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 
00:29:12.968 [2024-07-15 20:27:38.076268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.076305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.076598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.076628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.076776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.076805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.077012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.077042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.077319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.077350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.077589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.077619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.077816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.077845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.078055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.078085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.078283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.968 [2024-07-15 20:27:38.078315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.968 qpair failed and we were unable to recover it. 00:29:12.968 [2024-07-15 20:27:38.078579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.078608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 
00:29:12.969 [2024-07-15 20:27:38.078803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.078833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.079100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.079130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.079429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.079461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.079625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.079655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.079929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.079938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.080126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.080136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.080313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.080322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.080473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.080482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.080630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.080639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.080859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.080888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 
00:29:12.969 [2024-07-15 20:27:38.081102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.081131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.081326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.081357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.081639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.081648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.081817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.081826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.082021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.082051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.082190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.082220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.082492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.082523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.082683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.082692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.082849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.082858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.083073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.083103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 
00:29:12.969 [2024-07-15 20:27:38.083321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.083353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.083538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.083547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.083714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.083723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.083969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.083978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.084131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.084140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.084375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.084406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.084699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.084730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.084957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.084986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.085199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.085228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 00:29:12.969 [2024-07-15 20:27:38.085565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.969 [2024-07-15 20:27:38.085627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.969 qpair failed and we were unable to recover it. 
00:29:12.970 [2024-07-15 20:27:38.085809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.085829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.086089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.086104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.086335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.086346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.086599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.086608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.086853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.086863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.087017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.087027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.087188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.087213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.087378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.087409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.087643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.087672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.087954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.087963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 
00:29:12.970 [2024-07-15 20:27:38.088124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.088133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.088284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.088294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.088441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.088450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.088630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.088639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.088740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.088749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.088917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.088926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.089014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.089023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.089230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.089270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.089418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.089427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.089582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.089591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 
00:29:12.970 [2024-07-15 20:27:38.089743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.089752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.089916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.089924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.090127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.090157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.090294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.090325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.090589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.090618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.090757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.090787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.090999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.091028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.091302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.091333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.091562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.091592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.091733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.091742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 
00:29:12.970 [2024-07-15 20:27:38.091822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.091830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.091976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.091986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.092205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.092214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.092455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.092465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.092732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.092761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.092996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.093026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.093236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.970 [2024-07-15 20:27:38.093274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.970 qpair failed and we were unable to recover it. 00:29:12.970 [2024-07-15 20:27:38.093400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.093409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.093562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.093572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.093837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.093846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 
00:29:12.971 [2024-07-15 20:27:38.094065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.094076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.094227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.094236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.094348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.094358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.094535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.094545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.094643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.094652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.094848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.094857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.095097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.095126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.095338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.095369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.095520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.095550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.095755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.095764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 
00:29:12.971 [2024-07-15 20:27:38.095993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.096002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.096249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.096263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.096434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.096443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.096540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.096548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.096734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.096744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.096846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.096856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.097099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.097108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.097279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.097289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.097548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.097558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.097716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.097725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 
00:29:12.971 [2024-07-15 20:27:38.097873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.097882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.098042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.098051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.098168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.098177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.098427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.098436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.098534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.098543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.098741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.098751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.099000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.099030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.971 [2024-07-15 20:27:38.099363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.971 [2024-07-15 20:27:38.099431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.971 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.099579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.099613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.099830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.099861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 
00:29:12.972 [2024-07-15 20:27:38.100128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.100158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.100301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.100333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.100547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.100577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.100729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.100743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.100857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.100872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.100984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.100997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.101282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.101297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.101554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.101568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.101745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.101760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.101990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.102023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 
00:29:12.972 [2024-07-15 20:27:38.102181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.102216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.102451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.102481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.102632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.102641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.102871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.102880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.103056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.103065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.103144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.103152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.103317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.103326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.103425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.103436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.103662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.103671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.103778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.103787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 
00:29:12.972 [2024-07-15 20:27:38.103958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.103968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.104153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.104182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.104374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.104404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.104636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.104664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.104928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.104938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.105037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.105046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.105226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.105236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.105328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.105337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.105527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.105559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.105772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.105800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 
00:29:12.972 [2024-07-15 20:27:38.106032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.106062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.106362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.106392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.106578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.106587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.972 qpair failed and we were unable to recover it. 00:29:12.972 [2024-07-15 20:27:38.106736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.972 [2024-07-15 20:27:38.106764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.106918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.106946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.107226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.107266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.107468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.107498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.107645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.107679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.107883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.107897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.108018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.108032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 
00:29:12.973 [2024-07-15 20:27:38.108134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.108144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.108320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.108352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.108545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.108575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.108770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.108799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.109064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.109093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.109327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.109357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.109596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.109605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.109767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.109776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.110037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.110067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.110314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.110345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 
00:29:12.973 [2024-07-15 20:27:38.110543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.110578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.110749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.110758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.110934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.110964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.111158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.111187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.111334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.111344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.111504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.111513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.111681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.111690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.111851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.111860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.112007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.112017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.112262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.112271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 
00:29:12.973 [2024-07-15 20:27:38.112508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.112517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.112684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.112693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.112891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.112920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.113233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.113269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.113467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.113476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.113628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.113638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.113804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.113813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.114114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.114144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.114355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.114386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.114523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.114553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 
00:29:12.973 [2024-07-15 20:27:38.114775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.973 [2024-07-15 20:27:38.114805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.973 qpair failed and we were unable to recover it. 00:29:12.973 [2024-07-15 20:27:38.115082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.115112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.115385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.115415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.115731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.115761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.115969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.115999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.116183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.116213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.116518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.116549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.116887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.116953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc90d70 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.117203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.117237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc90d70 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.117411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.117442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc90d70 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 
00:29:12.974 [2024-07-15 20:27:38.117782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.117820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc90d70 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.118139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.118169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc90d70 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.118313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.118344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc90d70 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.118585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.118615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc90d70 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.118832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.118851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.119056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.119072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.119328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.119338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.119571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.119580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.119738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.119747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.119899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.119908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 
00:29:12.974 [2024-07-15 20:27:38.120124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.120133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.120310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.120320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.120436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.120446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.120526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.120534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.120680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.120689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.120859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.120868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.121143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.121152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.121245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.121258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.121436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.121445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.121548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.121557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 
00:29:12.974 [2024-07-15 20:27:38.121803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.121832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.122056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.122085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.122239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.122287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.122495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.122524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.122839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.122848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.122998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.123007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.123106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.123114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.974 [2024-07-15 20:27:38.123383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.974 [2024-07-15 20:27:38.123414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.974 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.123652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.123682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.123822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.123852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 
00:29:12.975 [2024-07-15 20:27:38.124075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.124105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.124251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.124289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.124575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.124605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.124809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.124818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.124915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.124924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.125021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.125029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.125123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.125133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.125353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.125364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.125515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.125524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.125671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.125681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 
00:29:12.975 [2024-07-15 20:27:38.125761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.125769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.125950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.125959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.126136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.126145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.126332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.126341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.126620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.126649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.126872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.126901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.127217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.127226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.127446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.127456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.127526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.127534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.127699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.127708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 
00:29:12.975 [2024-07-15 20:27:38.127935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.127965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.128176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.128206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.128489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.128498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.128665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.128674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.128777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.128787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.128941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.128951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.129053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.129061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.129248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.129261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.129366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.129374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.129564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.129573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 
00:29:12.975 [2024-07-15 20:27:38.129732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.129741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.129890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.129899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.130088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.130097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.130245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.130305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.130581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.130612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.130837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.130867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.131081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.131110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.131305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.975 [2024-07-15 20:27:38.131336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.975 qpair failed and we were unable to recover it. 00:29:12.975 [2024-07-15 20:27:38.131631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.131660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.131981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.131990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 
00:29:12.976 [2024-07-15 20:27:38.132236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.132245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.132497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.132506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.132603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.132611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.132788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.132797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.132947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.132956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.133209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.133239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.133454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.133484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.133796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.133835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.134098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.134128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.134398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.134429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 
00:29:12.976 [2024-07-15 20:27:38.134617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.134626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.134792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.134800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.135057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.135066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.135251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.135264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.135375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.135384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.135532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.135541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.135635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.135643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.135804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.135814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.135975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.135984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.136151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.136161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 
00:29:12.976 [2024-07-15 20:27:38.136384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.136393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.136499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.136508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.136588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.136597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.136766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.136775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.136936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.136945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.137092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.137101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.137200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.137208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.137383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.137393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.137505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.137514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.976 qpair failed and we were unable to recover it. 00:29:12.976 [2024-07-15 20:27:38.137685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.976 [2024-07-15 20:27:38.137695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 
00:29:12.977 [2024-07-15 20:27:38.137916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.137925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.138071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.138080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.138323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.138333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.138417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.138426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.138654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.138663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.138904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.138934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.139080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.139110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.139250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.139288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.139412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.139421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.139590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.139600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 
00:29:12.977 [2024-07-15 20:27:38.139820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.139829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.140066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.140075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.140197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.140225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.140619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.140688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.140964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.140980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.141085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.141099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.141329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.141345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.141600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.141619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.141742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.141756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.141983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.141998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 
00:29:12.977 [2024-07-15 20:27:38.142164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.142178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.142348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.142363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.142618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.142632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.142820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.142834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.143026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.143039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.143185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.143195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.143311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.143320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.143609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.143639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.143904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.143934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.144143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.144172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 
00:29:12.977 [2024-07-15 20:27:38.144505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.144535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.144818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.144827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.145040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.145049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.145343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.145353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.145504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.145513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.145673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.145681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.145898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.145907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.146113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.146121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.146272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.146283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.146445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.146454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 
00:29:12.977 [2024-07-15 20:27:38.146651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.146661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.146758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.146767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.146932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.146941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.147096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.147104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.147220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.147230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.147330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.147338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.147493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.147502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.147721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.977 [2024-07-15 20:27:38.147730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.977 qpair failed and we were unable to recover it. 00:29:12.977 [2024-07-15 20:27:38.147892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.147901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 00:29:12.978 [2024-07-15 20:27:38.148080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.148089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 
00:29:12.978 [2024-07-15 20:27:38.148322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.148331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 00:29:12.978 [2024-07-15 20:27:38.148422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.148431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 00:29:12.978 [2024-07-15 20:27:38.148592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.148601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 00:29:12.978 [2024-07-15 20:27:38.148762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.148771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 00:29:12.978 [2024-07-15 20:27:38.148940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.148949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 00:29:12.978 [2024-07-15 20:27:38.149195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.149205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 00:29:12.978 [2024-07-15 20:27:38.149299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.149307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 00:29:12.978 [2024-07-15 20:27:38.149382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.149391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 00:29:12.978 [2024-07-15 20:27:38.149492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.149500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 00:29:12.978 [2024-07-15 20:27:38.149662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.149671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 
00:29:12.978 [2024-07-15 20:27:38.149780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.149789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 00:29:12.978 [2024-07-15 20:27:38.149891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.149899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 00:29:12.978 [2024-07-15 20:27:38.150119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.150128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 00:29:12.978 [2024-07-15 20:27:38.150204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.150212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 00:29:12.978 [2024-07-15 20:27:38.150308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.150318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 00:29:12.978 [2024-07-15 20:27:38.150491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.150499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 00:29:12.978 [2024-07-15 20:27:38.150669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.150678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 00:29:12.978 [2024-07-15 20:27:38.150829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.150838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 00:29:12.978 [2024-07-15 20:27:38.150938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.150967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 00:29:12.978 [2024-07-15 20:27:38.151121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.978 [2024-07-15 20:27:38.151151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.978 qpair failed and we were unable to recover it. 
00:29:12.978 [2024-07-15 20:27:38.151296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.978 [2024-07-15 20:27:38.151325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:12.978 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats for every reconnection attempt through 20:27:38.186 ...]
00:29:12.982 [2024-07-15 20:27:38.186914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.982 [2024-07-15 20:27:38.186923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:12.982 qpair failed and we were unable to recover it.
00:29:12.982 [2024-07-15 20:27:38.187101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.187110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.187277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.187287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.187437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.187447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.187669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.187678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.187853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.187862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.187958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.187967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.188197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.188206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.188368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.188378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.188473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.188482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.188706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.188715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 
00:29:12.982 [2024-07-15 20:27:38.188855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.188864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.188959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.188969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.189067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.189077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.189161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.189169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.189263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.189273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.189437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.189446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.189594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.189603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.189706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.189717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.189818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.189827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.189997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.190006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 
00:29:12.982 [2024-07-15 20:27:38.190240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.190249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.190418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.190427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.190587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.190597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.190818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.190826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.190903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.190912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.190998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.191006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.191079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.191088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.191235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.191244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.191416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.191427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.191577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.191586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 
00:29:12.982 [2024-07-15 20:27:38.191737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.191746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.982 [2024-07-15 20:27:38.191845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.982 [2024-07-15 20:27:38.191855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.982 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.191963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.191972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.192071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.192081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.192196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.192205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.192313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.192323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.192479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.192488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.192663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.192672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.192771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.192780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.192885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.192895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 
00:29:12.983 [2024-07-15 20:27:38.192994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.193003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.193172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.193182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.193267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.193276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.193435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.193445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.193610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.193620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.193802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.193810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.193893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.193902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.194069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.194078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.194231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.194240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.194459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.194489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 
00:29:12.983 [2024-07-15 20:27:38.194630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.194660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.194799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.194828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.195144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.195174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.195382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.195412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.195727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.195757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.196012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.196042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.196313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.196343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.196558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.196588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.196753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.196762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.196989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.196998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 
00:29:12.983 [2024-07-15 20:27:38.197109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.197117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.197202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.197211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.197363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.197372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.197629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.197639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.197739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.197748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.197832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.197840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.198009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.198018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.198118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.198127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.198297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.198307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.198468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.198478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 
00:29:12.983 [2024-07-15 20:27:38.198645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.198654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.198811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.198820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.198900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.198908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.199004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.199012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.199107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.199116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.199212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.199221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.199322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.199333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.199488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.199497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.983 [2024-07-15 20:27:38.199760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.983 [2024-07-15 20:27:38.199770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.983 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.199853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.199861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 
00:29:12.984 [2024-07-15 20:27:38.199949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.199957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.200110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.200119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.200199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.200207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.200360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.200371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.200541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.200552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.200725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.200733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.200889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.200898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.201067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.201075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.201158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.201167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.201427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.201437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 
00:29:12.984 [2024-07-15 20:27:38.201604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.201613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.201750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.201759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.202002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.202069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc90d70 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.202244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.202294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.202590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.202621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.202743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.202757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.202985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.202999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.203266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.203280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.203485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.203516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.203780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.203794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 
00:29:12.984 [2024-07-15 20:27:38.203986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.204000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.204160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.204174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.204404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.204419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.204596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.204611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.204710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.204724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.204882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.204896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.205069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.205083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.205267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.205279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.205545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.205574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.205733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.205763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 
00:29:12.984 [2024-07-15 20:27:38.205965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.205994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.206198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.206228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.206417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.206484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.206705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.206738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.207044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.207074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.207289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.207321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.207487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.207517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.207741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.207770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.208055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.208085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.208232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.208275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 
00:29:12.984 [2024-07-15 20:27:38.208558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.208589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.208874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.208888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.209067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.209082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.209220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.209234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.209329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.209344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.209528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.209537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.209686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.209695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.209844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.209854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.209956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.984 [2024-07-15 20:27:38.209965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.984 qpair failed and we were unable to recover it. 00:29:12.984 [2024-07-15 20:27:38.210057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.985 [2024-07-15 20:27:38.210067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.985 qpair failed and we were unable to recover it. 
00:29:12.985 [2024-07-15 20:27:38.210208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.985 [2024-07-15 20:27:38.210217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:12.985 qpair failed and we were unable to recover it.
00:29:12.985 [... the same three-line error block (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every retry logged between 2024-07-15 20:27:38.210312 and 20:27:38.248386, console timestamps 00:29:12.985-00:29:12.989 ...]
00:29:12.989 [2024-07-15 20:27:38.248487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.248496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.248730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.248739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.249025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.249054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.249318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.249348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.249614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.249643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.249906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.249915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.250068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.250076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.250323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.250332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.250547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.250556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.250816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.250825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 
00:29:12.989 [2024-07-15 20:27:38.250940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.250949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.251050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.251059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.251158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.251169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.251391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.251401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.251514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.251524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.251742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.251752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.251858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.251867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.252210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.252220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.252337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.252347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.252440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.252448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 
00:29:12.989 [2024-07-15 20:27:38.252628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.252637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.252732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.252740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.252835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.252844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.252982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.252992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.253141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.253152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.253212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.253220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.253379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.253390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.253504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.253513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.253607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.253616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.253775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.253783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 
00:29:12.989 [2024-07-15 20:27:38.253870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.989 [2024-07-15 20:27:38.253880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.989 qpair failed and we were unable to recover it. 00:29:12.989 [2024-07-15 20:27:38.254062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.254071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.254246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.254259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.254462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.254472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.254622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.254632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.254738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.254747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.254917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.254926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.255094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.255103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.255188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.255196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.255289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.255298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 
00:29:12.990 [2024-07-15 20:27:38.255396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.255405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.255553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.255562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.255661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.255669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.255817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.255826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.255915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.255923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.256082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.256091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.256169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.256178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.256260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.256269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.256365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.256373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.256525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.256534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 
00:29:12.990 [2024-07-15 20:27:38.256679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.256688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.256876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.256886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.256980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.256989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.257162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.257171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.257320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.257329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.257421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.257429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.257580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.257589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.257745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.257754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.258091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.258120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.258297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.258329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 
00:29:12.990 [2024-07-15 20:27:38.258705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.258715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.258858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.258867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.258966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.258974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.259040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.259049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.259158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.259168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.259270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.259278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.259411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.259420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.259523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.259531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.259684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.259694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.259845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.259854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 
00:29:12.990 [2024-07-15 20:27:38.260014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.260023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.990 [2024-07-15 20:27:38.260105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.990 [2024-07-15 20:27:38.260114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.990 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.260279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.260289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.260531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.260540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.260715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.260724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.260894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.260903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.261124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.261133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.261333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.261343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.261511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.261520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.261699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.261709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 
00:29:12.991 [2024-07-15 20:27:38.261862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.261871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.261955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.261964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.262131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.262140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.262284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.262294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.262385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.262393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.262610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.262619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.262713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.262722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.262998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.263007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.263089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.263097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.263195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.263204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 
00:29:12.991 [2024-07-15 20:27:38.263385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.263394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.263639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.263648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.263927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.263936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.264139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.264148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.264362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.264371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.264538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.264548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.264670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.264679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.264919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.264929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.265140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.265149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.265327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.265336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 
00:29:12.991 [2024-07-15 20:27:38.265495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.265505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.265669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.265680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.265927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.265936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.266152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.266161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.266352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.266363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.266585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.266594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.266857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.266866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.267037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.267046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.267301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.267311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.267495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.267504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 
00:29:12.991 [2024-07-15 20:27:38.267654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.267663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.267849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.267858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.268100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.268129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.268385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.268416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.268736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.268745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.269000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.269009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.269107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.269116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.269216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.269225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.269395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.269405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.269567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.269576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 
00:29:12.991 [2024-07-15 20:27:38.269734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.269742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.269915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.269923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.270076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.991 [2024-07-15 20:27:38.270085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.991 qpair failed and we were unable to recover it. 00:29:12.991 [2024-07-15 20:27:38.270244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.270257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.270497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.270507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.270613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.270622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.270867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.270875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.271158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.271167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.271337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.271347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.271526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.271535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 
00:29:12.992 [2024-07-15 20:27:38.271703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.271712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.271883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.271892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.271988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.271997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.272178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.272187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.272385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.272394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.272568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.272577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.272705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.272715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.272827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.272836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.273072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.273081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.273246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.273260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 
00:29:12.992 [2024-07-15 20:27:38.273529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.273538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.273658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.273667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.273773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.273782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.273963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.273972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.274234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.274244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.274348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.274358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.274529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.274538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.274720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.274729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.275026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.275035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.275246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.275261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 
00:29:12.992 [2024-07-15 20:27:38.275564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.275573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.275728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.275737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:12.992 [2024-07-15 20:27:38.275921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.992 [2024-07-15 20:27:38.275930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:12.992 qpair failed and we were unable to recover it. 00:29:13.266 [2024-07-15 20:27:38.276218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.266 [2024-07-15 20:27:38.276229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.266 qpair failed and we were unable to recover it. 00:29:13.266 [2024-07-15 20:27:38.276412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.266 [2024-07-15 20:27:38.276423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.266 qpair failed and we were unable to recover it. 00:29:13.266 [2024-07-15 20:27:38.276670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.266 [2024-07-15 20:27:38.276679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.266 qpair failed and we were unable to recover it. 00:29:13.266 [2024-07-15 20:27:38.276829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.266 [2024-07-15 20:27:38.276839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.266 qpair failed and we were unable to recover it. 00:29:13.266 [2024-07-15 20:27:38.277025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.266 [2024-07-15 20:27:38.277060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.266 qpair failed and we were unable to recover it. 00:29:13.266 [2024-07-15 20:27:38.277210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.266 [2024-07-15 20:27:38.277240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.266 qpair failed and we were unable to recover it. 00:29:13.266 [2024-07-15 20:27:38.277377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.266 [2024-07-15 20:27:38.277407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.266 qpair failed and we were unable to recover it. 
00:29:13.266 [2024-07-15 20:27:38.277652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.266 [2024-07-15 20:27:38.277681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.266 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.277807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.277836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.278151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.278160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.278317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.278327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.278575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.278584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.278802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.278811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.279000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.279009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.279275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.279306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.279569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.279599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.279868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.279897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 
00:29:13.267 [2024-07-15 20:27:38.280209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.280239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.280459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.280528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.280784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.280818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.281041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.281056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.281306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.281321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.281582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.281596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.281850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.281864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.282047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.282058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.282331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.282340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.282580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.282589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 
00:29:13.267 [2024-07-15 20:27:38.282747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.282756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.283018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.283027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.283121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.283130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.283306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.283316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.283561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.283570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.283837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.283846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.284062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.284071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.284325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.284334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.284634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.284643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.284797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.284806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 
00:29:13.267 [2024-07-15 20:27:38.284996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.285005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.285251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.285264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.285531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.285540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.285792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.285802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.286021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.286030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.286301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.286310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.286479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.286488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.286707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.286717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.286885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.286895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.287131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.287140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 
00:29:13.267 [2024-07-15 20:27:38.287401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.287410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.267 qpair failed and we were unable to recover it. 00:29:13.267 [2024-07-15 20:27:38.287634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.267 [2024-07-15 20:27:38.287643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.287894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.287903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.288129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.288138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.288392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.288401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.288683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.288692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.288965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.288974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.289188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.289197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.289352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.289362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.289658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.289667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 
00:29:13.268 [2024-07-15 20:27:38.289861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.289870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.290120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.290131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.290349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.290359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.290548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.290557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.290746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.290755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.290859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.290868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.291037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.291046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.291202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.291211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.291404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.291435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.291719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.291748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 
00:29:13.268 [2024-07-15 20:27:38.291904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.291934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.292203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.292212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.292456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.292465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.292642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.292650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.292897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.292906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.293008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.293017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.293233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.293242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.293433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.293443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.293700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.293709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.293930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.293939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 
00:29:13.268 [2024-07-15 20:27:38.294128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.294137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.294378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.294387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.294612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.294622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.294808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.294817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.294933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.294942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.295057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.295066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.295315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.295324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.295543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.295552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.295837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.295867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.296061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.296090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 
00:29:13.268 [2024-07-15 20:27:38.296368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.296416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.268 [2024-07-15 20:27:38.296717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.268 [2024-07-15 20:27:38.296746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.268 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.297104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.297113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.297262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.297271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.297447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.297456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.297684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.297692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.297843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.297852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.297963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.297972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.298193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.298202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.298394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.298404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 
00:29:13.269 [2024-07-15 20:27:38.298586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.298595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.298848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.298859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.299106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.299115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.299278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.299287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.299449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.299458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.299677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.299686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.299906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.299915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.300166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.300175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.300341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.300351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.300502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.300512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 
00:29:13.269 [2024-07-15 20:27:38.300739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.300748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.301029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.301038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.301189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.301198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.301392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.301401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.301660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.301690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.301918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.301948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.302209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.302238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.302516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.302546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.302866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.302895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.303171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.303201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 
00:29:13.269 [2024-07-15 20:27:38.303504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.303535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.303804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.303833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.304075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.304105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.304408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.304439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.304730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.304759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.305091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.305120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.305395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.305426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.305693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.305722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.306029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.306038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.306291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.306300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 
00:29:13.269 [2024-07-15 20:27:38.306470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.306479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.306580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.306588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.269 [2024-07-15 20:27:38.306760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.269 [2024-07-15 20:27:38.306769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.269 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.306965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.306974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.307150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.307159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.307477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.307508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.307755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.307784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.308102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.308132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.308406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.308416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.308582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.308590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 
00:29:13.270 [2024-07-15 20:27:38.308841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.308870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.309071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.309105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.309385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.309416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.309713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.309743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.310006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.310036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.310305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.310336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.310548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.310578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.310848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.310878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.311140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.311149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.311303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.311312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 
00:29:13.270 [2024-07-15 20:27:38.311559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.311568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.311731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.311741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.312008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.312037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.312250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.312306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.312504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.312534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.312742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.312772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.313061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.313090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.313396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.313427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.313623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.313653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.313936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.313965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 
00:29:13.270 [2024-07-15 20:27:38.314177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.314205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.314488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.314519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.314733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.314763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.315062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.315092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.315289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.315298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.315495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.315526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.315798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.315827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.316093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.316122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.316341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.316372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.316575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.316605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 
00:29:13.270 [2024-07-15 20:27:38.316887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.316915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.270 [2024-07-15 20:27:38.317114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.270 [2024-07-15 20:27:38.317122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.270 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.317212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.317221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.317374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.317384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.317606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.317614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.317797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.317807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.318056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.318065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.318157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.318176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.318395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.318404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.318629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.318638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 
00:29:13.271 [2024-07-15 20:27:38.318865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.318874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.319104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.319115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.319397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.319406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.319510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.319518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.319709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.319718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.319890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.319899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.320144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.320173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.320480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.320511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.320802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.320831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.321124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.321133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 
00:29:13.271 [2024-07-15 20:27:38.321358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.321368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.321639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.321648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.321897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.321906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.322127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.322135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.322331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.322340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.322534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.322564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.322786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.322816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.323103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.323133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.323431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.323463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.323672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.323681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 
00:29:13.271 [2024-07-15 20:27:38.323851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.323860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.324084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.324113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.324382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.324413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.324551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.324581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.324876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.324905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.325169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.325199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.325478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.271 [2024-07-15 20:27:38.325509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.271 qpair failed and we were unable to recover it. 00:29:13.271 [2024-07-15 20:27:38.325838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.325868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.326087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.326096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.326374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.326384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 
00:29:13.272 [2024-07-15 20:27:38.326582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.326591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.326788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.326797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.327077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.327086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.327190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.327200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.327381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.327391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.327563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.327573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.327747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.327755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.327977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.327987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.328217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.328226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.328510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.328520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 
00:29:13.272 [2024-07-15 20:27:38.328749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.328758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.329003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.329014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.329247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.329260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.329520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.329529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.329682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.329691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.329913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.329922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.330084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.330093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.330334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.330344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.330544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.330553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.330762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.330771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 
00:29:13.272 [2024-07-15 20:27:38.330994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.331003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.331263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.331272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.331462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.331471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.331641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.331670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.331828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.331858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.332159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.332189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.332477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.332487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.332727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.332735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.332955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.332964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.333190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.333220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 
00:29:13.272 [2024-07-15 20:27:38.333509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.333540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.333760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.333789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.334101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.334110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.334272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.334282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.334530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.334539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.334706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.334716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.334918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.334927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.272 [2024-07-15 20:27:38.335181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.272 [2024-07-15 20:27:38.335190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.272 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.335359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.335369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.335615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.335624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 
00:29:13.273 [2024-07-15 20:27:38.335877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.335906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.336117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.336146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.336344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.336375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.336581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.336611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.336825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.336854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.337065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.337094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.337386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.337417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.337719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.337748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.338041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.338070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.338368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.338399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 
00:29:13.273 [2024-07-15 20:27:38.338663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.338693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.338963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.338998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.339272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.339303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.339617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.339647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.339932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.339962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.340331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.340361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.340625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.340655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.340864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.340893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.341169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.341198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.341481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.341490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 
00:29:13.273 [2024-07-15 20:27:38.341709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.341718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.341968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.341977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.342249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.342262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.342541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.342550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.342768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.342778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.343032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.343042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.343236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.343245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.343409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.343418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.343586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.343614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.343887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.343916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 
00:29:13.273 [2024-07-15 20:27:38.344152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.344182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.344385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.344394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.344618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.344648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.344856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.344886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.345098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.345128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.345432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.345441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.345661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.345670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.345833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.345842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.273 [2024-07-15 20:27:38.346091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.273 [2024-07-15 20:27:38.346121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.273 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.346273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.346304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 
00:29:13.274 [2024-07-15 20:27:38.346513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.346542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.346755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.346785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.347103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.347132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.347405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.347414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.347658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.347667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.347889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.347898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.348087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.348096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.348292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.348302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.348496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.348505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.348601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.348609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 
00:29:13.274 [2024-07-15 20:27:38.348771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.348780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.349055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.349089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.349358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.349389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.349704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.349733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.350045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.350074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.350338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.350369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.350699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.350729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.350982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.351012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.351334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.351365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.351637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.351668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 
00:29:13.274 [2024-07-15 20:27:38.351986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.352016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.352310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.352342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.352639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.352668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.352964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.352994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.353288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.353318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.353544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.353574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.353779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.353809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.354077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.354106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.354320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.354351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 00:29:13.274 [2024-07-15 20:27:38.354493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.274 [2024-07-15 20:27:38.354522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.274 qpair failed and we were unable to recover it. 
00:29:13.274 [2024-07-15 20:27:38.354718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:13.274 [2024-07-15 20:27:38.354747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 
00:29:13.274 qpair failed and we were unable to recover it. 
00:29:13.274 [...] the same three-line sequence ("connect() failed, errno = 111", "sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it.") repeats continuously with only the timestamps changing, from [2024-07-15 20:27:38.354718] through [2024-07-15 20:27:38.403466] (elapsed 00:29:13.274 to 00:29:13.280).
00:29:13.280 [2024-07-15 20:27:38.403643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.403653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.403931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.403941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.404170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.404179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.404429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.404438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.404704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.404714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.404977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.404986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.405141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.405150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.405321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.405331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.405614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.405623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.405904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.405934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 
00:29:13.280 [2024-07-15 20:27:38.406206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.406235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.406555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.406585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.406731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.406761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.407088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.407117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.407399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.407430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.407747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.407776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.407987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.408016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.408215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.408225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.408456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.408467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.408764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.408774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 
00:29:13.280 [2024-07-15 20:27:38.409029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.409038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.409282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.409292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.409487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.409496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.409693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.409723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.410008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.410037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.410269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.410300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.410503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.410533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.410830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.410860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.411164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.411194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.411516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.411526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 
00:29:13.280 [2024-07-15 20:27:38.411784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.411793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.412037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.412047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.412270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.412279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.280 qpair failed and we were unable to recover it. 00:29:13.280 [2024-07-15 20:27:38.412555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.280 [2024-07-15 20:27:38.412565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.412755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.412765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.413064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.413073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.413228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.413238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.413461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.413493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.413789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.413824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.414113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.414143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 
00:29:13.281 [2024-07-15 20:27:38.414436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.414446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.414650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.414680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.414965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.414995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.415300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.415310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.415489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.415499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.415685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.415716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.416018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.416047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.416334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.416344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.416510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.416519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.416773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.416782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 
00:29:13.281 [2024-07-15 20:27:38.416971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.417007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.417296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.417327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.417632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.417662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.417975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.418005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.418313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.418323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.418560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.418570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.418734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.418744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.418911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.418920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.419097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.419107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.419337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.419368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 
00:29:13.281 [2024-07-15 20:27:38.419623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.419653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.419952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.419982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.420265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.420275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.420522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.420531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.420726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.420736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.420846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.420856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.421024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.421033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.421294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.421304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.421550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.421559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.421803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.421812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 
00:29:13.281 [2024-07-15 20:27:38.421967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.421976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.422154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.422163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.422441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.422450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.422630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.281 [2024-07-15 20:27:38.422639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.281 qpair failed and we were unable to recover it. 00:29:13.281 [2024-07-15 20:27:38.422862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.422871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.423084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.423093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.423344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.423354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.423586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.423615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.423884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.423919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.424274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.424305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 
00:29:13.282 [2024-07-15 20:27:38.424602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.424632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.424919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.424948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.425262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.425293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.425506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.425515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.425767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.425776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.426024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.426033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.426202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.426212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.426441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.426451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.426645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.426655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.426917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.426947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 
00:29:13.282 [2024-07-15 20:27:38.427165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.427194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.427483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.427514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.427817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.427847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.428059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.428089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.428340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.428350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.428601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.428610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.428759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.428768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.429004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.429034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.429287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.429318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.429535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.429565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 
00:29:13.282 [2024-07-15 20:27:38.429857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.429887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.430021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.430051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.430366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.430376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.430546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.430556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.430803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.430812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.431054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.431085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.431326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.431357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.431648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.431678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.431846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.431875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.432172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.432202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 
00:29:13.282 [2024-07-15 20:27:38.432500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.432531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.432730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.432759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.432914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.432943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.433239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.433276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.282 qpair failed and we were unable to recover it. 00:29:13.282 [2024-07-15 20:27:38.433502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.282 [2024-07-15 20:27:38.433511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.433745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.433754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.434004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.434014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.434205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.434215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.434467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.434479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.434642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.434651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 
00:29:13.283 [2024-07-15 20:27:38.434899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.434908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.435082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.435091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.435267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.435276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.435532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.435564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.435862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.435891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.436110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.436139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.436428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.436438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.436608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.436617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.436810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.436819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.437012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.437041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 
00:29:13.283 [2024-07-15 20:27:38.437331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.437362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.437586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.437616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.437840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.437870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.438138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.438168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.438376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.438407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.438704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.438733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.439030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.439059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.439361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.439392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.439683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.439713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.439980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.440009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 
00:29:13.283 [2024-07-15 20:27:38.440220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.440250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.440526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.440556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.440854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.440883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.441107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.441136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.441271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.441301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.441585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.441594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.441917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.441926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.442124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.442153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.442470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.442502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.283 qpair failed and we were unable to recover it. 00:29:13.283 [2024-07-15 20:27:38.442666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.283 [2024-07-15 20:27:38.442675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 
00:29:13.284 [2024-07-15 20:27:38.442920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.442928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.443209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.443238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.443562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.443593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.443866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.443897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.444187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.444217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.444431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.444440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.444688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.444698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.444850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.444870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.445071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.445106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.445325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.445356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 
00:29:13.284 [2024-07-15 20:27:38.445562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.445592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.445855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.445864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.446109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.446118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.446311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.446320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.446523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.446553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.446836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.446866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.447130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.447160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.447440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.447471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.447778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.447787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.448040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.448049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 
00:29:13.284 [2024-07-15 20:27:38.448246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.448329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.448553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.448562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.448729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.448738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.449015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.449045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.449284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.449315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.449592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.449622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.449835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.449864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.450154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.450183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.450492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.450524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.450807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.450837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 
00:29:13.284 [2024-07-15 20:27:38.451068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.451098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.451366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.451375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.451526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.451535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.451717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.451726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.451957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.451986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.452201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.452231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.452446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.452456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.452655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.452664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.452912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.452921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 00:29:13.284 [2024-07-15 20:27:38.453014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.453022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.284 qpair failed and we were unable to recover it. 
00:29:13.284 [2024-07-15 20:27:38.453292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.284 [2024-07-15 20:27:38.453301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.453566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.453595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.453808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.453838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.454132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.454171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.454392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.454401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.454640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.454650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.454819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.454828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.455082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.455112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.455348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.455385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.455596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.455605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 
00:29:13.285 [2024-07-15 20:27:38.455862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.455891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.456091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.456120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.456365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.456397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.456691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.456721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.457014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.457044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.457267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.457299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.457587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.457616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.457813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.457843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.458137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.458166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.458470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.458502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 
00:29:13.285 [2024-07-15 20:27:38.458827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.458856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.459092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.459122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.459407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.459439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.459706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.459735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.459974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.460004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.460193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.460202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.460465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.460475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.460641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.460651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.460835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.460866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.461032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.461061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 
00:29:13.285 [2024-07-15 20:27:38.461299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.461330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.461627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.461657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.461924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.461955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.462225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.462277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.462523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.462532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.462665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.462675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.462922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.462931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.463193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.463222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.463506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.463538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.463784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.463815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 
00:29:13.285 [2024-07-15 20:27:38.464111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.285 [2024-07-15 20:27:38.464141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.285 qpair failed and we were unable to recover it. 00:29:13.285 [2024-07-15 20:27:38.464441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.464473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.464785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.464815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.465094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.465124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.465381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.465390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.465639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.465648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.465891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.465899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.466051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.466060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.466331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.466367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.466696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.466727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 
00:29:13.286 [2024-07-15 20:27:38.467048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.467078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.467369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.467401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.467631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.467660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.467955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.467984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.468283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.468315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.468593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.468603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.468849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.468858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.469021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.469030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.469182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.469192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.469496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.469527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 
00:29:13.286 [2024-07-15 20:27:38.469827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.469857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.470153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.470182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.470479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.470511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.470804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.470833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.471100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.471130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.471410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.471441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.471738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.471767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.471916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.471945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.472161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.472191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.472404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.472414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 
00:29:13.286 [2024-07-15 20:27:38.472662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.472671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.472917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.472926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.473112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.473121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.473348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.473379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.473701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.473730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.474043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.474081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.474264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.474273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.474461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.474471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.474673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.474702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.474988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.475017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 
00:29:13.286 [2024-07-15 20:27:38.475286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.475318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.286 qpair failed and we were unable to recover it. 00:29:13.286 [2024-07-15 20:27:38.475638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.286 [2024-07-15 20:27:38.475668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.475962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.475992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.476207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.476237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.476475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.476505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.476795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.476804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.476899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.476908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.477159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.477169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.477344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.477356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.477577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.477586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 
00:29:13.287 [2024-07-15 20:27:38.477757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.477766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.477931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.477941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.478143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.478172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.478460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.478491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.478704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.478732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.478993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.479023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.479238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.479289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.479581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.479612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.479924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.479954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.480269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.480300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 
00:29:13.287 [2024-07-15 20:27:38.480517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.480547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.480844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.480873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.481187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.481217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.481534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.481544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.481697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.481706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.481935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.481965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.482207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.482236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.482592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.482623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.482886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.482916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.483234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.483281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 
00:29:13.287 [2024-07-15 20:27:38.483512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.483521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.483775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.483784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.484054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.484063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.484293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.484303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.484566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.484575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.484830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.484840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.485085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.485094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.485283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.485293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.485495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.485504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.485727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.485757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 
00:29:13.287 [2024-07-15 20:27:38.486062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.486092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.486387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.486418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.287 qpair failed and we were unable to recover it. 00:29:13.287 [2024-07-15 20:27:38.486684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.287 [2024-07-15 20:27:38.486714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.486992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.487022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.487317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.487349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.487648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.487678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.487970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.487979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.488230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.488240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.488413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.488424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.488585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.488594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 
00:29:13.288 [2024-07-15 20:27:38.488883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.488913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.489157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.489187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.489418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.489450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.489750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.489780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.490034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.490043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.490307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.490317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.490469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.490478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.490758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.490787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.491003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.491033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.491287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.491318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 
00:29:13.288 [2024-07-15 20:27:38.491580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.491589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.491837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.491846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.492070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.492079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.492312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.492322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.492487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.492497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.492734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.492764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.493093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.493123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.493347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.493377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.493601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.493631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.493854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.493863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 
00:29:13.288 [2024-07-15 20:27:38.494110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.494119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.494406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.494416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.494669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.494679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.494952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.494962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.495122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.495131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.495333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.495364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.495679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.495709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.288 [2024-07-15 20:27:38.496009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.288 [2024-07-15 20:27:38.496039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.288 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.496336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.496367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.496664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.496694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 
00:29:13.289 [2024-07-15 20:27:38.496990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.497020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.497226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.497267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.497566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.497596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.497890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.497899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.498128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.498137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.498346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.498356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.498526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.498535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.498765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.498794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.499061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.499096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.499382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.499391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 
00:29:13.289 [2024-07-15 20:27:38.499564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.499573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.499830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.499860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.500155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.500184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.500393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.500403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.500658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.500687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.500953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.500983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.501186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.501216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.501533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.501543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.501710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.501719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.501900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.501939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 
00:29:13.289 [2024-07-15 20:27:38.502203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.502233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.502442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.502472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.502666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.502675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.502840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.502870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.503083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.503112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.503390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.503400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.503623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.503632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.503855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.503864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.504113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.504122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.504396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.504415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 
00:29:13.289 [2024-07-15 20:27:38.504571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.504581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.504840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.504870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.505064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.505093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.505344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.505374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.505667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.505677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.505908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.505917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.506141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.506150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.506401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.506411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.289 [2024-07-15 20:27:38.506679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.289 [2024-07-15 20:27:38.506688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.289 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.506877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.506897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 
00:29:13.290 [2024-07-15 20:27:38.507071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.507080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.507340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.507372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.507589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.507619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.507885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.507915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.508239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.508278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.508495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.508525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.508873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.508903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.509054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.509084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.509386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.509423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.509708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.509738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 
00:29:13.290 [2024-07-15 20:27:38.510060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.510090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.510367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.510376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.510554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.510563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.510766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.510796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.511007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.511036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.511338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.511369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.511628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.511637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.511839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.511848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.512104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.512113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.512284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.512294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 
00:29:13.290 [2024-07-15 20:27:38.512550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.512559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.512747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.512756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.512990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.513021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.513326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.513356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.513665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.513695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.514017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.514047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.514323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.514353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.514653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.514682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.514846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.514875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.515165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.515195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 
00:29:13.290 [2024-07-15 20:27:38.515514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.515546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.515746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.515775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.515974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.516005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.516141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.516171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.516380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.516390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.516621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.516631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.516882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.516891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.516975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.516983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.517211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.290 [2024-07-15 20:27:38.517220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.290 qpair failed and we were unable to recover it. 00:29:13.290 [2024-07-15 20:27:38.517471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.517481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 
00:29:13.291 [2024-07-15 20:27:38.517740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.517749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.517990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.517999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.518165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.518174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.518406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.518415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.518601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.518610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.518695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.518704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.518857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.518867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.519096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.519105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.519217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.519228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.519428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.519438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 
00:29:13.291 [2024-07-15 20:27:38.519644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.519674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.520001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.520031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.520331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.520363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.520574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.520603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.520818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.520849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.521048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.521078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.521346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.521377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.521710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.521739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.522034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.522064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.522333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.522364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 
00:29:13.291 [2024-07-15 20:27:38.522565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.522574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.522860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.522890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.523123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.523153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.523466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.523498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.523715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.523744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.524038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.524068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.524296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.524328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.524621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.524651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.524928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.524937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.525140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.525149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 
00:29:13.291 [2024-07-15 20:27:38.525428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.525438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.525601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.525611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.525786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.525795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.526079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.526108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.526387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.526418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.526645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.526655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.526935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.526944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.527118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.527128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.527354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.527363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 00:29:13.291 [2024-07-15 20:27:38.527536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.291 [2024-07-15 20:27:38.527546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.291 qpair failed and we were unable to recover it. 
00:29:13.291 [2024-07-15 20:27:38.527803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.527833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.528099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.528130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.528452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.528484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.528787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.528797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.529051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.529061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.529216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.529226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.529477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.529487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.529640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.529650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.529853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.529889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.530187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.530218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 
00:29:13.292 [2024-07-15 20:27:38.530492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.530501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.530666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.530676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.530933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.530942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.531216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.531246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.531590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.531621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.531928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.531958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.532250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.532293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.532583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.532592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.532780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.532789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.532947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.532956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 
00:29:13.292 [2024-07-15 20:27:38.533243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.533284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.533596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.533626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.533928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.533937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.534186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.534195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.534445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.534455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.534661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.534670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.534825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.534834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.535098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.535128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.535364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.535395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.535671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.535700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 
00:29:13.292 [2024-07-15 20:27:38.535922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.535952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.536172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.536202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.536411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.536442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.536684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.292 [2024-07-15 20:27:38.536693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.292 qpair failed and we were unable to recover it. 00:29:13.292 [2024-07-15 20:27:38.536964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.536973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.537174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.537213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.537408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.537424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.537689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.537703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.537866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.537881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.538072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.538102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 
00:29:13.293 [2024-07-15 20:27:38.538332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.538363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.538567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.538597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.538895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.538909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.539199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.539214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.539469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.539512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.539736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.539765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.540037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.540067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.540306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.540337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.540635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.540655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.540873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.540888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 
00:29:13.293 [2024-07-15 20:27:38.541056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.541071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.541189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.541203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.541476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.541491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.541733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.541747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.541993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.542008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.542280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.542295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.542461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.542475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.542685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.542713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.542999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.543030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.543199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.543228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 
00:29:13.293 [2024-07-15 20:27:38.543528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.543597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.543920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.543958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.544217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.544249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.544567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.544597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.544832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.544862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.545085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.545113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.545384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.545416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.545697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.545711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.545956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.545970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.546206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.546221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 
00:29:13.293 [2024-07-15 20:27:38.546486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.546501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.546774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.546788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.546997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.547011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.547249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.547268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.547515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.293 [2024-07-15 20:27:38.547529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.293 qpair failed and we were unable to recover it. 00:29:13.293 [2024-07-15 20:27:38.547796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.547813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.547998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.548013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.548225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.548239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.548510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.548540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.548836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.548865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 
00:29:13.294 [2024-07-15 20:27:38.549164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.549193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.549491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.549521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.549735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.549749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.550004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.550019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.550278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.550293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.550529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.550543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.550777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.550791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.550956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.550970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.551234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.551271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.551603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.551633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 
00:29:13.294 [2024-07-15 20:27:38.551830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.551844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.552089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.552103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.552398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.552413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.552676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.552691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.552876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.552890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.553117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.553132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.553399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.553414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.553585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.553599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.553846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.553876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.554173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.554203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 
00:29:13.294 [2024-07-15 20:27:38.554533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.554564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.554863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.554892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.555189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.555220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.555556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.555588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.555874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.555888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.556127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.556141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.556385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.556400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.556577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.556591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.556854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.556883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.557184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.557213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 
00:29:13.294 [2024-07-15 20:27:38.557461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.557492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.557737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.557766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.558062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.558090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.558356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.558388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.558648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.558678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.558875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.294 [2024-07-15 20:27:38.558909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.294 qpair failed and we were unable to recover it. 00:29:13.294 [2024-07-15 20:27:38.559196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.559225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.559466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.559502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.559804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.559833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.560129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.560159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 
00:29:13.295 [2024-07-15 20:27:38.560460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.560492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.560784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.560814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.561116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.561146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.561440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.561471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.561682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.561712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.561894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.561908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.562168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.562182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.562367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.562389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.562598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.562613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.562851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.562866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 
00:29:13.295 [2024-07-15 20:27:38.563043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.563058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.563245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.563284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.563553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.563583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.563887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.563918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.564236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.564277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.564580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.564610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.564849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.564879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.565145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.565175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.565448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.565479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.565706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.565720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 
00:29:13.295 [2024-07-15 20:27:38.565915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.565929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.566032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.566046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.566229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.566243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.566425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.566440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.566700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.566730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.566999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.567029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.567311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.567342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.567648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.567662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.567795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.567809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.568076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.568106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 
00:29:13.295 [2024-07-15 20:27:38.568395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.568426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.568684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.568698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.568881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.568895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.569104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.569134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.569436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.569466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.569739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.569756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.570004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.295 [2024-07-15 20:27:38.570019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.295 qpair failed and we were unable to recover it. 00:29:13.295 [2024-07-15 20:27:38.570272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.570287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.570533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.570547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.570799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.570814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 
00:29:13.296 [2024-07-15 20:27:38.570995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.571009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.571209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.571223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.571487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.571502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.571849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.571864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.572134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.572164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.572461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.572492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.572640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.572670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.572894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.572909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.573102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.573116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.573396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.573411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 
00:29:13.296 [2024-07-15 20:27:38.573595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.573609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.573901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.573931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.574265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.574296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.574517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.574531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.574766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.574781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.575045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.575059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.575247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.575265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.575544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.575558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.575820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.575835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.576042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.576056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 
00:29:13.296 [2024-07-15 20:27:38.576320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.576336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.576598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.576613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.576785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.576800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.577058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.577072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.577284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.577299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.577579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.577593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.577854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.577868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.578079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.578093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.578363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.578378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.578613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.578628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 
00:29:13.296 [2024-07-15 20:27:38.578866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.578880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.579145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.579160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.296 [2024-07-15 20:27:38.579355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.296 [2024-07-15 20:27:38.579371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.296 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.579644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.579658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.579946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.579961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.580217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.580234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.580423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.580439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.580637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.580652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.580813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.580827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.581086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.581117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 
00:29:13.297 [2024-07-15 20:27:38.581337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.581368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.581638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.581669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.581888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.581918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.582207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.582237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.582458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.582489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.582813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.582843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.583042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.583072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.583366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.583396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.583560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.583590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.583859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.583874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 
00:29:13.297 [2024-07-15 20:27:38.584139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.584153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.584424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.584439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.584713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.584728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.584963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.584977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.585215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.585229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.585512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.585527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.585693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.585707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.585971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.585985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.586198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.586213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.586395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.586410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 
00:29:13.297 [2024-07-15 20:27:38.586583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.586598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.586761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.586775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.587048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.587063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.587297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.587312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.587524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.587538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.587651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.587666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.587946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.587960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.588154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.588169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.588294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.588310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.588512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.588528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 
00:29:13.297 [2024-07-15 20:27:38.588797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.588811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.588993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.589007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.589128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.589142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.297 [2024-07-15 20:27:38.589322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.297 [2024-07-15 20:27:38.589337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.297 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.589500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.589514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.589791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.589826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.590043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.590073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.590346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.590378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.590651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.590666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.590947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.590962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 
00:29:13.298 [2024-07-15 20:27:38.591217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.591231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.591522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.591537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.591772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.591786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.592022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.592037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.592273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.592288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.592551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.592565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.592667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.592681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.592944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.592958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.593215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.593245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.593514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.593544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 
00:29:13.298 [2024-07-15 20:27:38.593842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.593871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.594170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.594200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.594498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.594530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.594777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.594807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.595097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.595112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.595349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.595364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.595613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.595628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.595821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.595835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.596113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.596128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.596398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.596413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 
00:29:13.298 [2024-07-15 20:27:38.596597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.596613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.596838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.596869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.597103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.597132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.597414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.597446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.597740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.597755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.597940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.597955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.598131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.598145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.598358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.598373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.598562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.598577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.598805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.598820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 
00:29:13.298 [2024-07-15 20:27:38.599131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.599146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.599394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.599409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.599684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.599698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.599865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.599880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.600053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.298 [2024-07-15 20:27:38.600068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.298 qpair failed and we were unable to recover it. 00:29:13.298 [2024-07-15 20:27:38.600235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.299 [2024-07-15 20:27:38.600252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.299 qpair failed and we were unable to recover it. 00:29:13.299 [2024-07-15 20:27:38.600442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.299 [2024-07-15 20:27:38.600456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.299 qpair failed and we were unable to recover it. 00:29:13.587 [2024-07-15 20:27:38.600694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.587 [2024-07-15 20:27:38.600725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.587 qpair failed and we were unable to recover it. 00:29:13.587 [2024-07-15 20:27:38.600994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.587 [2024-07-15 20:27:38.601025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.587 qpair failed and we were unable to recover it. 00:29:13.587 [2024-07-15 20:27:38.601298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.601329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 
00:29:13.588 [2024-07-15 20:27:38.601530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.601559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.601850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.601880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.602094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.602123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.602397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.602429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.602634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.602649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.602917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.602932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.603107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.603122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.603325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.603340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.603638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.603668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.603930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.603961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 
00:29:13.588 [2024-07-15 20:27:38.604185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.604215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.604443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.604475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.604773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.604803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.604999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.605014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.605285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.605316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.605516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.605546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.605848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.605878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.606165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.606179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.606439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.606454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.606709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.606724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 
00:29:13.588 [2024-07-15 20:27:38.606883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.606897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.607168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.607197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.607488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.607519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.607743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.607773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.608073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.608102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.608370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.608415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.608714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.608743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.609005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.609020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.609201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.609215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.609402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.609418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 
00:29:13.588 [2024-07-15 20:27:38.609616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.609645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.609933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.609962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.610160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.610190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.610484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.610515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.610816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.610846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.611140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.611175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.611379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.611411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.611727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.611742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.611925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.611940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 00:29:13.588 [2024-07-15 20:27:38.612230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.588 [2024-07-15 20:27:38.612245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.588 qpair failed and we were unable to recover it. 
00:29:13.589 [2024-07-15 20:27:38.612507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.612522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.612768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.612782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.613013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.613027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.613162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.613176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.613345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.613361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.613570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.613584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.613852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.613867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.614053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.614067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.614345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.614376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.614651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.614681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 
00:29:13.589 [2024-07-15 20:27:38.614949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.614964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.615128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.615142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.615360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.615391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.615673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.615703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.615905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.615935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.616076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.616091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.616339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.616370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.616595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.616625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.616828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.616857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.617125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.617156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 
00:29:13.589 [2024-07-15 20:27:38.617442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.617473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.617835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.617850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.618095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.618110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.618229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.618244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.618512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.618527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.618773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.618803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.619125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.619155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.619436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.619468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.619705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.619734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.620005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.620035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 
00:29:13.589 [2024-07-15 20:27:38.620285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.620300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.620551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.620566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.589 qpair failed and we were unable to recover it. 00:29:13.589 [2024-07-15 20:27:38.620755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.589 [2024-07-15 20:27:38.620770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.621082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.621096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.621402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.621416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.621676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.621694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.621950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.621965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.622154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.622168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.622279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.622295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.622559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.622574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 
00:29:13.590 [2024-07-15 20:27:38.622811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.622825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.623102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.623116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.623359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.623374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.623637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.623652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.623892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.623906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.624151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.624165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.624433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.624448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.624631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.624646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.624846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.624860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.624999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.625014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 
00:29:13.590 [2024-07-15 20:27:38.625191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.625205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.625470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.625485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.625762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.625776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.625963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.625977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.626243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.626262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.626553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.626568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.626753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.626767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.627029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.627044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.627155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.627170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.627408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.627423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 
00:29:13.590 [2024-07-15 20:27:38.627604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.627618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.627801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.627816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.627986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.628000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.628266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.628281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.628494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.628508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.628781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.628796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.629032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.629046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.629265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.629280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.629527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.629541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.590 qpair failed and we were unable to recover it. 00:29:13.590 [2024-07-15 20:27:38.629777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.590 [2024-07-15 20:27:38.629792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 
00:29:13.591 [2024-07-15 20:27:38.629976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.629991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.630261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.630277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.630578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.630593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.630770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.630784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.630976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.630990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.631267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.631285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.631549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.631564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.631752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.631767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.631979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.631994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.632217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.632232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 
00:29:13.591 [2024-07-15 20:27:38.632502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.632517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.632635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.632649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.632834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.632848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.633134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.633148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.633313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.633329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.633633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.633648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.633774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.633789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.634050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.634064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.634352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.634366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.634568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.634583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 
00:29:13.591 [2024-07-15 20:27:38.634831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.634845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.635076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.635090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.635356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.635371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.635606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.635620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.635870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.635885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.636003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.636017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.636282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.636297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.636460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.636475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.636690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.636704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.636879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.636893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 
00:29:13.591 [2024-07-15 20:27:38.637154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.637168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.637380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.637395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.637529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.637557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.637810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.637821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.637979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.637989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.638241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.638251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.638508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.638517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.591 qpair failed and we were unable to recover it. 00:29:13.591 [2024-07-15 20:27:38.638791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.591 [2024-07-15 20:27:38.638800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.639024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.639033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.639260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.639269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 
00:29:13.592 [2024-07-15 20:27:38.639443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.639452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.639621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.639631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.639907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.639917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.640167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.640176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.640401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.640410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.640578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.640591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.640833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.640862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.641132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.641162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.641451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.641482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.641727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.641756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 
00:29:13.592 [2024-07-15 20:27:38.642092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.642102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.642329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.642339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.642506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.642515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.642782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.642792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.642976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.642985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.643235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.643244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.643498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.643508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.643664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.643674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.643882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.643891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.644046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.644056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 
00:29:13.592 [2024-07-15 20:27:38.644219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.644228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.644458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.644468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.592 qpair failed and we were unable to recover it. 00:29:13.592 [2024-07-15 20:27:38.644689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.592 [2024-07-15 20:27:38.644698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.644817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.644827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.644944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.644953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.645194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.645203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.645432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.645442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.645719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.645729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.645900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.645910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.646161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.646170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 
00:29:13.593 [2024-07-15 20:27:38.646419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.646429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.646616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.646625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.646819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.646829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.646934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.646944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.647103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.647112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.647421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.647431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.647705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.647714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.647962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.647972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.648219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.648228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.648455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.648465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 
00:29:13.593 [2024-07-15 20:27:38.648640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.648649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.648875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.648885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.649134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.649143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.649436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.649446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.649699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.649709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.649965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.649976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.650145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.650154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.650274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.650283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.650612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.650622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.650723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.650731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 
00:29:13.593 [2024-07-15 20:27:38.650997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.651007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.651240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.651249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.651428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.651438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.651721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.651750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.652016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.652046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.652376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.652408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.652704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.652734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.653037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.653067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.653354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.653364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.653538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.653548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 
00:29:13.593 [2024-07-15 20:27:38.653713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.653722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.653888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.593 [2024-07-15 20:27:38.653897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.593 qpair failed and we were unable to recover it. 00:29:13.593 [2024-07-15 20:27:38.654062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.654071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.654247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.654260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.654434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.654444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.654613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.654622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.654807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.654816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.655061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.655071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.655192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.655200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.655351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.655361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 
00:29:13.594 [2024-07-15 20:27:38.655450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.655458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.655742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.655752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.655950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.655960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.656153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.656163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.656414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.656424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.656664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.656674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.656912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.656921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.657156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.657166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.657444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.657454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.657612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.657621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 
00:29:13.594 [2024-07-15 20:27:38.657780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.657790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.658013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.658023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.658179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.658189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.658417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.658427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.658601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.658610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.658861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.658872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.659098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.659108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.659408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.659418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.659670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.659679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.659784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.659793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 
00:29:13.594 [2024-07-15 20:27:38.660048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.660057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.660214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.660224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.660474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.660483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.660641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.594 [2024-07-15 20:27:38.660651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-15 20:27:38.660754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.660762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.660945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.660955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.661104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.661113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.661228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.661237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.661442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.661452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.661636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.661646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 
00:29:13.595 [2024-07-15 20:27:38.661880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.661890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.662086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.662095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.662358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.662368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.662477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.662485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.662638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.662648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.662759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.662768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.663018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.663028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.663259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.663269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.663442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.663451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.663615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.663624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 
00:29:13.595 [2024-07-15 20:27:38.663897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.663907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.664085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.664095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.664367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.664377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.664599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.664608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.664905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.664914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.665153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.665162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.665394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.665403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.665580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.665590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.665837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.665846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.666110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.666119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 
00:29:13.595 [2024-07-15 20:27:38.666278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.666288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.666566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.666576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.666822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.666831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.667055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.667064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.667318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.667329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.667495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.667506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.667695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.667704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.667899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.667909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.668179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.668188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.668420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.668429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 
00:29:13.595 [2024-07-15 20:27:38.668630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.668640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.668872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.668881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.669073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.669082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.669334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.669344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-15 20:27:38.669602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.595 [2024-07-15 20:27:38.669611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.669768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.669778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.670053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.670063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.670265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.670274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.670550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.670559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.670851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.670861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 
00:29:13.596 [2024-07-15 20:27:38.671102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.671112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.671362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.671372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.671626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.671635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.671888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.671898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.672167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.672177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.672408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.672418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.672615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.672624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.672831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.672840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.673140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.673149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.673317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.673326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 
00:29:13.596 [2024-07-15 20:27:38.673601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.673611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.673867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.673877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.674032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.674043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.674298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.674307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.674483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.674493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.674787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.674796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.674957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.674967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.675168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.675194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.675380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.675390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.675643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.675653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 
00:29:13.596 [2024-07-15 20:27:38.675878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.675888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.676166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.676176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.676429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.676439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.676736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.676746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.676914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.676924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.677150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.677160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.677336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.677345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.677544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.677554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.677802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.677812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.678051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.678060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 
00:29:13.596 [2024-07-15 20:27:38.678303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.678313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.678581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.678591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.678836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.678846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.679073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.679082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-15 20:27:38.679359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.596 [2024-07-15 20:27:38.679369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.679617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.679626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.679816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.679826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.680050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.680060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.680235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.680245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.680416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.680428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 
00:29:13.597 [2024-07-15 20:27:38.680604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.680614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.680870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.680880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.681155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.681165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.681370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.681381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.681660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.681669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.681770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.681779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.682026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.682036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.682285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.682294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.682450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.682459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.682708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.682717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 
00:29:13.597 [2024-07-15 20:27:38.682957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.682967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.683237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.683247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.683440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.683453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.683636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.683645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.683813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.683823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.684092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.684101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.684331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.684341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.597 [2024-07-15 20:27:38.684525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.597 [2024-07-15 20:27:38.684535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.597 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.684650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.684659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.684903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.684913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 
00:29:13.598 [2024-07-15 20:27:38.685145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.685154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.685417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.685427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.685582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.685592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.685822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.685832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.685994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.686004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.686231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.686241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.686351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.686362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.686567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.686577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.686801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.686811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.686997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.687006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 
00:29:13.598 [2024-07-15 20:27:38.687228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.687238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.687467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.687478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.687581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.687592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.687754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.687764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.687921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.687930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.688176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.688186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.688412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.688422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.688699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.688708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.688887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.688896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.689131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.689140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 
00:29:13.598 [2024-07-15 20:27:38.689314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.689323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.689504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.689514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.689753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.689762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.689887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.689897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.690078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.690087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.690339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.690349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.690576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.690586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.690814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.690824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.690976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.690985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.691155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.691164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 
00:29:13.598 [2024-07-15 20:27:38.691335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.691344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.691594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.691603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.691767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.691780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.692037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.692046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.692177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.692186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.598 qpair failed and we were unable to recover it. 00:29:13.598 [2024-07-15 20:27:38.692382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.598 [2024-07-15 20:27:38.692393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.692492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.692504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.692722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.692732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.692909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.692918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.693085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.693094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 
00:29:13.599 [2024-07-15 20:27:38.693326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.693336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.693525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.693535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.693707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.693716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.693896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.693906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.694129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.694138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.694295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.694305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.694406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.694417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.694571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.694582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.694738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.694748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.695023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.695032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 
00:29:13.599 [2024-07-15 20:27:38.695320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.695330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.695577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.695587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.695845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.695855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.695970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.695979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.696237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.696247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.696514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.696524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.696615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.696623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.696785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.696794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.696917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.696926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.697175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.697185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 
00:29:13.599 [2024-07-15 20:27:38.697425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.697435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.697746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.697755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.697910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.697920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.698215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.698224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.698451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.698461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.698704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.698714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.698909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.698919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.699139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.699148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.699348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.699358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.699452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.699461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 
00:29:13.599 [2024-07-15 20:27:38.699616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.699626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.699910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.699920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.700112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.700123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.700356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.700365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.700618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.599 [2024-07-15 20:27:38.700627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.599 qpair failed and we were unable to recover it. 00:29:13.599 [2024-07-15 20:27:38.700725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.700736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.700911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.700922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.701037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.701046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.701293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.701303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.701476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.701486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 
00:29:13.600 [2024-07-15 20:27:38.701636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.701646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.701872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.701882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.702036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.702045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.702234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.702243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.702525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.702537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.702785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.702794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.703058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.703067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.703308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.703318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.703484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.703493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.703596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.703606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 
00:29:13.600 [2024-07-15 20:27:38.703862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.703871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.704148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.704158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.704311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.704320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.704475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.704486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.704636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.704645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.704869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.704879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.705120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.705130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.705294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.705304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.705515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.705524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.705663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.705672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 
00:29:13.600 [2024-07-15 20:27:38.705894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.705903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.706084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.706093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.706266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.706275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.706458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.706467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.706624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.706634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.706854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.706863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.707130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.707139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.707417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.707427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.707583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.707593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.707695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.707705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 
00:29:13.600 [2024-07-15 20:27:38.707818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.707828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.708030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.708040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.708212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.708226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.708476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.708487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.708685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.708695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.708917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.600 [2024-07-15 20:27:38.708927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.600 qpair failed and we were unable to recover it. 00:29:13.600 [2024-07-15 20:27:38.709143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.709153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.709376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.709387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.709634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.709644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.709741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.709751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 
00:29:13.601 [2024-07-15 20:27:38.709864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.709873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.710148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.710157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.710310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.710320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.710497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.710506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.710610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.710619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.710868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.710877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.711048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.711057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.711250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.711264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.711558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.711568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.711824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.711834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 
00:29:13.601 [2024-07-15 20:27:38.712058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.712067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.712229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.712238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.712601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.712610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.712809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.712819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.713003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.713013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.713264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.713273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.713552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.713562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.713698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.713707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.713977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.713987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.714240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.714251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 
00:29:13.601 [2024-07-15 20:27:38.714423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.714433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.714679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.714688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.714855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.714864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.715139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.715148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.715319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.715331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.715511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.715520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.715774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.715783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.716065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.716074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.716228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.716237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.716417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.716427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 
00:29:13.601 [2024-07-15 20:27:38.716649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.716658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.716868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.716877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.717189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.601 [2024-07-15 20:27:38.717200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.601 qpair failed and we were unable to recover it. 00:29:13.601 [2024-07-15 20:27:38.717366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.717376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.717597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.717606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.717781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.717790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.718014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.718023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.718200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.718209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.718432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.718442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.718649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.718658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 
00:29:13.602 [2024-07-15 20:27:38.718820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.718830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.719084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.719093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.719285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.719295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.719479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.719489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.719687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.719697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.719796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.719805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.719903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.719912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.720140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.720149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.720393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.720403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.720640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.720650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 
00:29:13.602 [2024-07-15 20:27:38.720813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.720823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.721044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.721053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.721317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.721327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.721445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.721455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.721702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.721711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.721898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.721907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.722006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.722032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.722283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.722294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.722416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.722426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.722659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.722669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 
00:29:13.602 [2024-07-15 20:27:38.722937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.722946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.723141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.723151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.723314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.723324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.723576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.723586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.723759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.723769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.723992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.724001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.724169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.724181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.724406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.724416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.724644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.724653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 00:29:13.602 [2024-07-15 20:27:38.724824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.724833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.602 qpair failed and we were unable to recover it. 
00:29:13.602 [2024-07-15 20:27:38.725061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.602 [2024-07-15 20:27:38.725071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.725261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.725271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.725508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.725521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.725604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.725613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.725778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.725787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.726027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.726036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.726221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.726230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.726389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.726399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.726680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.726689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.726964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.726974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 
00:29:13.603 [2024-07-15 20:27:38.727143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.727151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.727321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.727331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.727560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.727570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.727737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.727746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.728043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.728053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.728305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.728314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.728497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.728506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.728606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.728615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.728825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.728834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.729068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.729077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 
00:29:13.603 [2024-07-15 20:27:38.729304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.729313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.729482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.729491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.729733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.729742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.729905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.729915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.730082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.730091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.730336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.730346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.730573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.730582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.730779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.730789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.731046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.731055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.731295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.731305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 
00:29:13.603 [2024-07-15 20:27:38.731434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.731443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.731617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.731626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.731792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.731802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.731899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.731907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.732026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.732035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.732147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.732156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.732397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.732407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.732606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.732614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.732832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.732842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.733037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.733047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 
00:29:13.603 [2024-07-15 20:27:38.733201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.603 [2024-07-15 20:27:38.733210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.603 qpair failed and we were unable to recover it. 00:29:13.603 [2024-07-15 20:27:38.733462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.733471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.733719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.733731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.733922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.733931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.734116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.734125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.734294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.734304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.734501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.734510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.734687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.734696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.734916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.734925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.735102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.735111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 
00:29:13.604 [2024-07-15 20:27:38.735346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.735357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.735636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.735645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.735813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.735822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.736067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.736076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.736165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.736173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.736323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.736332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.736449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.736459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.736720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.736730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.736933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.736942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.737233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.737241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 
00:29:13.604 [2024-07-15 20:27:38.737430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.737441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.737682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.737691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.737912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.737921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.738133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.738142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.738257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.738266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.738562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.738571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.738844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.738853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.739109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.739118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.739339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.739350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.739527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.739537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 
00:29:13.604 [2024-07-15 20:27:38.739705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.739714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.739972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.739981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.740149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.740158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.740341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.740350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.740602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.740612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.740850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.740859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.741025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.741034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.741182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.741190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.741368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.741378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.741541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.741550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 
00:29:13.604 [2024-07-15 20:27:38.741720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.741730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.604 [2024-07-15 20:27:38.741960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.604 [2024-07-15 20:27:38.741989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.604 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.742199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.742235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.742444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.742474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.742681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.742710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.743027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.743062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.743269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.743278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.743388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.743398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.743628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.743638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.743808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.743817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 
00:29:13.605 [2024-07-15 20:27:38.744062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.744071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.744221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.744230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.744434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.744465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.744759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.744789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.745083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.745112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.745275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.745306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.745531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.745561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.745759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.745789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.746104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.746134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.746286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.746296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 
00:29:13.605 [2024-07-15 20:27:38.746469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.746478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.746681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.746711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.746925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.746955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.747150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.747179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.747460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.747470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.747636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.747645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.747827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.747837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.748036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.748045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.748289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.748298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.748522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.748531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 
00:29:13.605 [2024-07-15 20:27:38.748689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.748699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.748847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.748856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.749025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.749034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.749321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.749352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.749571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.749601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.749998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.750027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.750306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.750336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.750494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.750523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.750738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.750766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.751145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.751175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 
00:29:13.605 [2024-07-15 20:27:38.751466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.751497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.751662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.751692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.605 [2024-07-15 20:27:38.751958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.605 [2024-07-15 20:27:38.751994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.605 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.752192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.752221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.752500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.752533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.752829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.752859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.753177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.753207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.753485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.753516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.753785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.753814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.754115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.754145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 
00:29:13.606 [2024-07-15 20:27:38.754389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.754419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.754566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.754596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.754802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.754831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.755113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.755142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.755357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.755388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.755661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.755690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.755862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.755893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.756200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.756229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.756504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.756535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.756752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.756782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 
00:29:13.606 [2024-07-15 20:27:38.757076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.757106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.757371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.757380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.757592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.757622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.757936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.757965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.758246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.758286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.758577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.758586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.758756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.758766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.758980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.758989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.759292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.759324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.759595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.759626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 
00:29:13.606 [2024-07-15 20:27:38.759976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.760005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.760295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.760327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.760629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.760658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.760977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.761006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.761295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.761326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.761623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.761653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.761890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.606 [2024-07-15 20:27:38.761920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.606 qpair failed and we were unable to recover it. 00:29:13.606 [2024-07-15 20:27:38.762171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.762180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.762425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.762435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.762604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.762613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 
00:29:13.607 [2024-07-15 20:27:38.762784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.762793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.763070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.763080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.763248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.763262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.763434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.763443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.763640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.763650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.763825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.763834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.764041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.764071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.764299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.764330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.764597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.764626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.764795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.764825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 
00:29:13.607 [2024-07-15 20:27:38.765090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.765120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.765352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.765361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.765507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.765517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.765677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.765686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.765945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.765954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.766133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.766142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.766363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.766373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.766599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.766628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.766942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.766972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.767181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.767228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 
00:29:13.607 [2024-07-15 20:27:38.767356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.767366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.767556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.767565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.767731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.767741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.767901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.767910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.768146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.768176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.768437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.768469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.768709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.768739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.768933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.768962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.769166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.769176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.769429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.769439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 
00:29:13.607 [2024-07-15 20:27:38.769594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.769614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.769808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.769838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.770076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.770105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.770374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.770404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.770627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.770656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.770940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.770970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.771237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.771276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.607 qpair failed and we were unable to recover it. 00:29:13.607 [2024-07-15 20:27:38.771471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.607 [2024-07-15 20:27:38.771501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.771657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.771687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.772011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.772040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 
00:29:13.608 [2024-07-15 20:27:38.772269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.772300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.772583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.772613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.772796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.772835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.773043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.773052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.773200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.773209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.773434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.773444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.773646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.773655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.773874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.773882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.774129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.774138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.774413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.774422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 
00:29:13.608 [2024-07-15 20:27:38.774534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.774544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.774719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.774729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.774911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.774920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.775104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.775130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.775430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.775462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.775707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.775737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.776058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.776089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.776302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.776333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.776614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.776645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.777017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.777059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 
00:29:13.608 [2024-07-15 20:27:38.777215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.777224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.777467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.777477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.777727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.777736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.777932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.777941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.778099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.778109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.778343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.778374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.778616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.778646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.778871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.778901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.779191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.779221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.779436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.779446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 
00:29:13.608 [2024-07-15 20:27:38.779701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.779730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.779957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.779986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.780187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.780216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.780543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.780553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.780750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.780780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.781101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.781131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.781399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.781409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.781531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.608 [2024-07-15 20:27:38.781540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.608 qpair failed and we were unable to recover it. 00:29:13.608 [2024-07-15 20:27:38.781650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.781659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.781847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.781856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 
00:29:13.609 [2024-07-15 20:27:38.782031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.782040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.782232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.782263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.782639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.782675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.782892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.782921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.783158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.783187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.783434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.783465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.783683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.783712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.783867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.783897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.784121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.784151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.784362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.784393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 
00:29:13.609 [2024-07-15 20:27:38.784608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.784638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.784793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.784822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.785127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.785157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.785367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.785376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.785541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.785551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.785650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.785659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.785882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.785891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.786091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.786100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.786320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.786330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.786554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.786584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 
00:29:13.609 [2024-07-15 20:27:38.786866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.786896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.787208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.787239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.787555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.787586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.787752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.787781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.788008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.788038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.788237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.788278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.788573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.788604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.788769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.788799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.789120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.789150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.789385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.789422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 
00:29:13.609 [2024-07-15 20:27:38.789556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.789585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.789858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.789887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.790176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.790206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.790479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.790489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.790644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.790653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.790912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.790942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.791185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.791215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.791519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.791551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.791712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.609 [2024-07-15 20:27:38.791741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.609 qpair failed and we were unable to recover it. 00:29:13.609 [2024-07-15 20:27:38.791906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.791935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 
00:29:13.610 [2024-07-15 20:27:38.792202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.792231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.792526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.792558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.792791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.792821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.792984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.793014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.793311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.793343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.793664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.793693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.793992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.794022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.794315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.794325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.794574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.794584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.794689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.794697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 
00:29:13.610 [2024-07-15 20:27:38.794870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.794880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.795109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.795140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.795434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.795465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.795687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.795716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.796001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.796030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.796193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.796222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.796612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.796681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.796916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.796950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.797244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.797288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.797510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.797540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 
00:29:13.610 [2024-07-15 20:27:38.797765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.797795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.798110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.798139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.798423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.798454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.798666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.798695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.798840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.798870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.799215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.799244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.799568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.799598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.799814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.799843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.800125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.800154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.800437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.800456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 
00:29:13.610 [2024-07-15 20:27:38.800696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.800711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.800842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.800856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.801094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.801108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.801368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.801398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.610 qpair failed and we were unable to recover it. 00:29:13.610 [2024-07-15 20:27:38.801730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.610 [2024-07-15 20:27:38.801760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.801921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.801950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.802147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.802176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.802452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.802483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.802750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.802780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.803126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.803156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 
00:29:13.611 [2024-07-15 20:27:38.803472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.803503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.803710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.803739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.804017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.804047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.804287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.804310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.804577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.804592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.804778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.804791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.805103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.805132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.805409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.805439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.805731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.805760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.806063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.806092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 
00:29:13.611 [2024-07-15 20:27:38.806299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.806313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.806444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.806458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.806571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.806586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.806766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.806780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.807083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.807112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.807440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.807471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.807771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.807786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.807995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.808024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.808326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.808357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.808503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.808517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 
00:29:13.611 [2024-07-15 20:27:38.808646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.808660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.808918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.808932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.809221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.809235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.809455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.809469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.809590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.809604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.809766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.809780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.809964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.809978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.810243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.810282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.810488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.810517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.810711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.810745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 
00:29:13.611 [2024-07-15 20:27:38.810880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.810909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.811138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.811167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.811410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.811442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.811746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.811774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.812125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.812154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.611 qpair failed and we were unable to recover it. 00:29:13.611 [2024-07-15 20:27:38.812451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.611 [2024-07-15 20:27:38.812466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.812668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.812682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.812808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.812822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.813046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.813060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.813197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.813211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 
00:29:13.612 [2024-07-15 20:27:38.813328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.813343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.813575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.813589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.813820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.813834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.814057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.814072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.814293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.814307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.814525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.814539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.814803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.814817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.815124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.815138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.815357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.815372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.815613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.815626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 
00:29:13.612 [2024-07-15 20:27:38.815748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.815761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.815992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.816006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.816215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.816248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.816494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.816524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.816744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.816773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.817036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.817065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.817356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.817371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.817605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.817619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.817727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.817741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 00:29:13.612 [2024-07-15 20:27:38.817937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.612 [2024-07-15 20:27:38.817951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.612 qpair failed and we were unable to recover it. 
00:29:13.617 [2024-07-15 20:27:38.867183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.617 [2024-07-15 20:27:38.867197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.617 qpair failed and we were unable to recover it. 00:29:13.617 [2024-07-15 20:27:38.867374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.617 [2024-07-15 20:27:38.867389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.617 qpair failed and we were unable to recover it. 00:29:13.617 [2024-07-15 20:27:38.867616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.617 [2024-07-15 20:27:38.867645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.617 qpair failed and we were unable to recover it. 00:29:13.617 [2024-07-15 20:27:38.867857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.617 [2024-07-15 20:27:38.867886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.617 qpair failed and we were unable to recover it. 00:29:13.617 [2024-07-15 20:27:38.868120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.617 [2024-07-15 20:27:38.868149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.617 qpair failed and we were unable to recover it. 00:29:13.617 [2024-07-15 20:27:38.868369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.617 [2024-07-15 20:27:38.868400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.617 qpair failed and we were unable to recover it. 00:29:13.617 [2024-07-15 20:27:38.868665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.617 [2024-07-15 20:27:38.868679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.617 qpair failed and we were unable to recover it. 00:29:13.617 [2024-07-15 20:27:38.869012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.617 [2024-07-15 20:27:38.869026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.617 qpair failed and we were unable to recover it. 00:29:13.617 [2024-07-15 20:27:38.869233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.617 [2024-07-15 20:27:38.869247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.617 qpair failed and we were unable to recover it. 00:29:13.617 [2024-07-15 20:27:38.869503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.617 [2024-07-15 20:27:38.869520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.617 qpair failed and we were unable to recover it. 
00:29:13.617 [2024-07-15 20:27:38.869650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.617 [2024-07-15 20:27:38.869663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.617 qpair failed and we were unable to recover it. 00:29:13.617 [2024-07-15 20:27:38.869848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.617 [2024-07-15 20:27:38.869862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.617 qpair failed and we were unable to recover it. 00:29:13.617 [2024-07-15 20:27:38.869975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.617 [2024-07-15 20:27:38.869990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.617 qpair failed and we were unable to recover it. 00:29:13.617 [2024-07-15 20:27:38.870295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.617 [2024-07-15 20:27:38.870310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.617 qpair failed and we were unable to recover it. 00:29:13.617 [2024-07-15 20:27:38.870573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.870587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.870814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.870828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.871092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.871106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.871269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.871284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.871481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.871495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.871714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.871728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 
00:29:13.618 [2024-07-15 20:27:38.871839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.871853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.872130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.872144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.872405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.872421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.872597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.872612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.872790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.872804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.873069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.873083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.873322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.873337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.873503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.873517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.873703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.873717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.873881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.873896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 
00:29:13.618 [2024-07-15 20:27:38.874076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.874090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.874298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.874313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.874532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.874547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.874784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.874798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.875060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.875074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.875326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.875340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.875526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.875541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.875728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.875742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.875860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.875875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.875972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.875987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 
00:29:13.618 [2024-07-15 20:27:38.876247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.876269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.876511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.876525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.876720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.876734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.877027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.877041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.877223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.877237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.877505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.877520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.877721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.877735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.877917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.877931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.878196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.878211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.878445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.878463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 
00:29:13.618 [2024-07-15 20:27:38.878712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.878726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.878924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.878939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.879157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.879171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.879439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.879454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.618 [2024-07-15 20:27:38.879732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.618 [2024-07-15 20:27:38.879746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.618 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.880018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.880033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.880323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.880337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.880514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.880529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.880718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.880732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.881011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.881026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 
00:29:13.619 [2024-07-15 20:27:38.881227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.881241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.881560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.881575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.881702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.881717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.881849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.881864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.882041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.882055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.882299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.882314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.882576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.882591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.882805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.882819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.883096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.883111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.883324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.883340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 
00:29:13.619 [2024-07-15 20:27:38.883617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.883631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.883725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.883738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.883964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.883978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.884239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.884253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.884474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.884489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.884782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.884796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.885050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.885082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.885374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.885386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.885585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.885595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.885825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.885834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 
00:29:13.619 [2024-07-15 20:27:38.885993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.886003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.886280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.886290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.886416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.886426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.886645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.886655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.886854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.886864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.886974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.886984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.887199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.887209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.887492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.887501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.887748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.887758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.619 qpair failed and we were unable to recover it. 00:29:13.619 [2024-07-15 20:27:38.887930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.619 [2024-07-15 20:27:38.887943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 
00:29:13.620 [2024-07-15 20:27:38.888196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.888205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.888454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.888463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.888582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.888592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.888774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.888783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.888971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.888982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.889184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.889194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.889340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.889350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.889555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.889564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.889793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.889803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.889925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.889934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 
00:29:13.620 [2024-07-15 20:27:38.890237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.890247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.890363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.890373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.890486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.890495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.890761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.890771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.890938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.890948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.891227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.891238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.891432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.891441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.891736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.891745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.891947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.891957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.892155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.892164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 
00:29:13.620 [2024-07-15 20:27:38.892338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.892348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.892574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.892584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.892809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.892819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.893046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.893057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.893313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.893323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.893427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.893437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.893680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.893689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.893866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.893875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.894122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.894131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.894308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.894318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 
00:29:13.620 [2024-07-15 20:27:38.894479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.894488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.894666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.894675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.894849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.894860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.895025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.895035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.895292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.895302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.895510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.895520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.895691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.895701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.895818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.895828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.895983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.895992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 00:29:13.620 [2024-07-15 20:27:38.896184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.620 [2024-07-15 20:27:38.896195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.620 qpair failed and we were unable to recover it. 
00:29:13.620 [2024-07-15 20:27:38.896470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.896480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.896638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.896647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.896808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.896818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.897024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.897033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.897288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.897299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.897413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.897423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.897591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.897601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.897798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.897807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.898002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.898012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.898259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.898271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 
00:29:13.621 [2024-07-15 20:27:38.898455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.898465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.898657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.898666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.898764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.898773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.898881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.898891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.899109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.899119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.899299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.899311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.899507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.899517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.899706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.899715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.899908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.899917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.900123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.900132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 
00:29:13.621 [2024-07-15 20:27:38.900316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.900326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.900506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.900516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.900684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.900693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.900858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.900868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.901136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.901145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.901380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.901390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.901597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.901607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.901784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.901795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.902069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.902079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.902257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.902268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 
00:29:13.621 [2024-07-15 20:27:38.902584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.902593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.902821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.902831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.903062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.903073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.903293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.903303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.903595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.903604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.903767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.903777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.903952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.903962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.904197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.904206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.904456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.904465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.904697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.904709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 
00:29:13.621 [2024-07-15 20:27:38.904887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.621 [2024-07-15 20:27:38.904896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.621 qpair failed and we were unable to recover it. 00:29:13.621 [2024-07-15 20:27:38.905099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.622 [2024-07-15 20:27:38.905108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.622 qpair failed and we were unable to recover it. 00:29:13.622 [2024-07-15 20:27:38.905271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.622 [2024-07-15 20:27:38.905282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.622 qpair failed and we were unable to recover it. 00:29:13.622 [2024-07-15 20:27:38.905453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.622 [2024-07-15 20:27:38.905464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.622 qpair failed and we were unable to recover it. 00:29:13.622 [2024-07-15 20:27:38.905703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.622 [2024-07-15 20:27:38.905713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.622 qpair failed and we were unable to recover it. 00:29:13.622 [2024-07-15 20:27:38.906000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.622 [2024-07-15 20:27:38.906010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.622 qpair failed and we were unable to recover it. 00:29:13.622 [2024-07-15 20:27:38.906201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.622 [2024-07-15 20:27:38.906210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.622 qpair failed and we were unable to recover it. 00:29:13.622 [2024-07-15 20:27:38.906422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.622 [2024-07-15 20:27:38.906432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.622 qpair failed and we were unable to recover it. 00:29:13.622 [2024-07-15 20:27:38.906714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.622 [2024-07-15 20:27:38.906723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.622 qpair failed and we were unable to recover it. 00:29:13.622 [2024-07-15 20:27:38.906918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.622 [2024-07-15 20:27:38.906928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.622 qpair failed and we were unable to recover it. 
00:29:13.622 [2024-07-15 20:27:38.907104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.622 [2024-07-15 20:27:38.907113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.622 qpair failed and we were unable to recover it. 00:29:13.622 [2024-07-15 20:27:38.907343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.622 [2024-07-15 20:27:38.907354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.622 qpair failed and we were unable to recover it. 00:29:13.622 [2024-07-15 20:27:38.907529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.622 [2024-07-15 20:27:38.907539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.622 qpair failed and we were unable to recover it. 00:29:13.622 [2024-07-15 20:27:38.907803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.622 [2024-07-15 20:27:38.907813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.622 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.908055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.908067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.908236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.908247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.908483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.908492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.908719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.908729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.908971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.908981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.909263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.909273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 
00:29:13.913 [2024-07-15 20:27:38.909381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.909394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.909625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.909635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.909755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.909765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.910012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.910021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.910263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.910273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.910517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.910526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.910788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.910798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.910909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.910918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.911178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.911188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.911348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.911359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 
00:29:13.913 [2024-07-15 20:27:38.911595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.911604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.911804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.911813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.912128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.912137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.912404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.912413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.912647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.912656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.912940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.912949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.913188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.913197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.913375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.913385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.913549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.913558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.913680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.913690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 
00:29:13.913 [2024-07-15 20:27:38.913876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.913885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.914045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.914054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.914318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.914329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.914538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.914547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.914744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.914753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.913 [2024-07-15 20:27:38.914862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.913 [2024-07-15 20:27:38.914871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.913 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.915132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.915141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.915424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.915434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.915683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.915694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.915888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.915898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 
00:29:13.914 [2024-07-15 20:27:38.916136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.916146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.916323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.916332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.916512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.916521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.916714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.916724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.916853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.916863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.917042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.917052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.917237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.917247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.917434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.917443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.917619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.917629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.917726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.917738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 
00:29:13.914 [2024-07-15 20:27:38.917917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.917926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.918132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.918141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.918426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.918436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.918600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.918610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.918784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.918793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.919085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.919095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.919322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.919334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.919559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.919569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.919738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.919748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.919843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.919851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 
00:29:13.914 [2024-07-15 20:27:38.920021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.920031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.920206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.920215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.920524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.920533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.920639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.920649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.920765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.920774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.920926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.920935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.921203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.921211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.921436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.921445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.921630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.921655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.921916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.921926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 
00:29:13.914 [2024-07-15 20:27:38.922204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.922214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.922430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.922441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.922625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.922635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.922842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.922851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.923085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.923094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.914 [2024-07-15 20:27:38.923268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.914 [2024-07-15 20:27:38.923278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.914 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.923469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.923478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.923741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.923750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.924018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.924028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.924293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.924303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 
00:29:13.915 [2024-07-15 20:27:38.924496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.924506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.924728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.924738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.925020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.925029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.925279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.925289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.925414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.925423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.925673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.925683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.925829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.925838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.926093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.926102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.926202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.926210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.926407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.926416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 
00:29:13.915 [2024-07-15 20:27:38.926590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.926599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.926774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.926783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.926960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.926969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.927207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.927216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.927432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.927442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.927624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.927634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.927845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.927857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.928079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.928089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.928323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.928333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.928561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.928570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 
00:29:13.915 [2024-07-15 20:27:38.928679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.928689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.928813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.928822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.929110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.929119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.929224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.929234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.929484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.929494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.929620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.929629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.929788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.929797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.930032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.930042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.930215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.930225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.930431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.930441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 
00:29:13.915 [2024-07-15 20:27:38.930541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.930550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.930745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.930754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.930987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.930997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.931158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.931168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.931282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.931292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.931521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.931530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.915 [2024-07-15 20:27:38.931612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.915 [2024-07-15 20:27:38.931620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.915 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.931886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.931896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.932142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.932151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.932412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.932422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 
00:29:13.916 [2024-07-15 20:27:38.932646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.932655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.932840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.932849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.933023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.933032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.933311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.933321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.933476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.933485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.933732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.933742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.933914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.933924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.934174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.934184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.934428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.934438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.934526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.934535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 
00:29:13.916 [2024-07-15 20:27:38.934620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.934628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.934851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.934860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.935049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.935058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.935304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.935314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.935486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.935495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.935665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.935675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.935846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.935857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.936082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.936091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.936201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.936210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.936371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.936381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 
00:29:13.916 [2024-07-15 20:27:38.936557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.936566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.936786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.936795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.937040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.937049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.937328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.937338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.937559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.937569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.937739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.937748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.938030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.938040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.938132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.938141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.938302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.938311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.938521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.938531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 
00:29:13.916 [2024-07-15 20:27:38.938779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.938788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.938883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.938892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.939060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.939069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.939221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.939230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.939395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.939405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.939628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.939637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.939820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.939830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.916 qpair failed and we were unable to recover it. 00:29:13.916 [2024-07-15 20:27:38.940068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.916 [2024-07-15 20:27:38.940077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.940226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.940235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.940421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.940431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 
00:29:13.917 [2024-07-15 20:27:38.940545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.940554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.940633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.940642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.940920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.940929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.941233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.941242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.941484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.941495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.941743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.941752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.941862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.941871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.942048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.942057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.942225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.942234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.942385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.942395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 
00:29:13.917 [2024-07-15 20:27:38.942482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.942491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.942712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.942721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.942901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.942910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.943078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.943087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.943335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.943346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.943501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.943510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.943758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.943769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.943892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.943901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.944147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.944156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.944365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.944375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 
00:29:13.917 [2024-07-15 20:27:38.944555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.944565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.944815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.944844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.945043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.945072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.945272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.945303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.945493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.945502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.945692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.945722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.946030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.946060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.946281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.946312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.946483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.946513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 00:29:13.917 [2024-07-15 20:27:38.946723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.917 [2024-07-15 20:27:38.946731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.917 qpair failed and we were unable to recover it. 
00:29:13.917 [2024-07-15 20:27:38.946921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.946930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.947103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.947112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.947300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.947331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.947577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.947607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.947879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.947908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.948174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.948203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.948361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.948370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.948572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.948602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.948849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.948878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.949166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.949194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 
00:29:13.918 [2024-07-15 20:27:38.949499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.949530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.949754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.949763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.949915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.949924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.950094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.950104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.950257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.950268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.950436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.950445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.950614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.950623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.950787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.950795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.950959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.950969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.951216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.951245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 
00:29:13.918 [2024-07-15 20:27:38.951575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.951606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.951820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.951829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.952048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.952057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.952236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.952245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.952524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.952533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.952785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.952814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.953114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.953149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.953398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.953428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.953605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.953635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.953903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.953933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 
00:29:13.918 [2024-07-15 20:27:38.954251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.954270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.954554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.954564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.954795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.954805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.955011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.955020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.955120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.955131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.955324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.955334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.955504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.955514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.955755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.955786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.956010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.956041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.918 [2024-07-15 20:27:38.956316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.956345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 
00:29:13.918 [2024-07-15 20:27:38.956564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.918 [2024-07-15 20:27:38.956595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.918 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.956914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.956943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.957219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.957249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.957520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.957550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.957816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.957846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.958120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.958149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.958411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.958442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.958688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.958717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.958942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.958951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.959199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.959208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 
00:29:13.919 [2024-07-15 20:27:38.959412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.959421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.959665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.959695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.959975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.960005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.960244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.960282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.960499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.960529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.960742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.960772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.960989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.960999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.961177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.961187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.961287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.961296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.961484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.961493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 
00:29:13.919 [2024-07-15 20:27:38.961663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.961673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.961908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.961918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.962164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.962173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.962365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.962375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.962487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.962496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.962724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.962734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.962884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.962896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.963030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.963039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.963191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.963201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.963520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.963530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 
00:29:13.919 [2024-07-15 20:27:38.963800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.963810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.964059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.964068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.964288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.964298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.964495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.964504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.964749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.964778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.965049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.965079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.965241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.965278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.965546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.965555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.965812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.965821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.966082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.966091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 
00:29:13.919 [2024-07-15 20:27:38.966196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.966206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.919 [2024-07-15 20:27:38.966390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.919 [2024-07-15 20:27:38.966400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.919 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.966553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.966562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.966735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.966745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.966977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.966986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.967227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.967268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.967538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.967567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.967898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.967927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.968177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.968207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.968502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.968533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 
00:29:13.920 [2024-07-15 20:27:38.968828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.968857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.969124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.969154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.969467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.969498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.969779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.969809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.970065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.970074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.970305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.970314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.970538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.970547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.970828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.970858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.971106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.971136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.971335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.971366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 
00:29:13.920 [2024-07-15 20:27:38.971576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.971605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.971893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.971922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.972187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.972216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.972469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.972500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.972724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.972754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.972955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.972965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.973156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.973168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.973376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.973385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.973587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.973596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.973833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.973862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 
00:29:13.920 [2024-07-15 20:27:38.974156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.974185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.974409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.974440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.974607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.974616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.974868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.974898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.975213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.975243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.975529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.975559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.975869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.975898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.976128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.976157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.976459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.976490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.976771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.976781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 
00:29:13.920 [2024-07-15 20:27:38.976974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.976983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.977207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.977216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.920 qpair failed and we were unable to recover it. 00:29:13.920 [2024-07-15 20:27:38.977466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.920 [2024-07-15 20:27:38.977475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.977578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.977587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.977839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.977848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.977959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.977968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.978217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.978227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.978422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.978431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.978594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.978604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.978832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.978861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 
00:29:13.921 [2024-07-15 20:27:38.979001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.979031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.979325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.979357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.979515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.979524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.979644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.979653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.979771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.979780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.980105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.980114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.980225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.980233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.980338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.980348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.980597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.980606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.980843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.980852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 
00:29:13.921 [2024-07-15 20:27:38.981022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.981033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.981205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.981215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.981437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.981447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.981602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.981611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.981767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.981776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.982096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.982126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.982335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.982371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.982599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.982628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.982908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.982939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.983188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.983217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 
00:29:13.921 [2024-07-15 20:27:38.983405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.983446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.983667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.983676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.983859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.983868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.984047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.984077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.984293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.984324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.984529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.984560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.984825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.984835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.921 [2024-07-15 20:27:38.985133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.921 [2024-07-15 20:27:38.985142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.921 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.985368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.985377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.985576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.985585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 
00:29:13.922 [2024-07-15 20:27:38.985796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.985826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.986050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.986080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.986274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.986305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.986587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.986617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.986811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.986840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.987031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.987061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.987378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.987415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.987528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.987537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.987716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.987725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.987917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.987946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 
00:29:13.922 [2024-07-15 20:27:38.988209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.988238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.988552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.988582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.988870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.988899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.989199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.989230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.989457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.989487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.989699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.989729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.989936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.989945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.990047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.990057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.990154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.990163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.990422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.990431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 
00:29:13.922 [2024-07-15 20:27:38.990541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.990551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.990802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.990811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.991083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.991093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.991336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.991345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.991596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.991606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.991784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.991793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.991985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.992019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.992317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.992348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.992637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.992667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.992828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.992858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 
00:29:13.922 [2024-07-15 20:27:38.993163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.993194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.993421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.993453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.993679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.993709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.994014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.994024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.994242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.994250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.994427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.994436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.994518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.994526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.994683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.994691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.922 [2024-07-15 20:27:38.994970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.922 [2024-07-15 20:27:38.995000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.922 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:38.995316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:38.995347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 
00:29:13.923 [2024-07-15 20:27:38.995571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:38.995601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:38.995824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:38.995853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:38.996030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:38.996059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:38.996360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:38.996391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:38.996612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:38.996641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:38.996863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:38.996872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:38.997023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:38.997032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:38.997239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:38.997278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:38.997491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:38.997521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:38.997812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:38.997843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 
00:29:13.923 [2024-07-15 20:27:38.998077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:38.998107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:38.998306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:38.998336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:38.998480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:38.998490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:38.998767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:38.998837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:38.999154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:38.999187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:38.999406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:38.999438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:38.999735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:38.999765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.000064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.000079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.000267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.000282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.000487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.000502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 
00:29:13.923 [2024-07-15 20:27:39.000709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.000739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.000978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.001008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.001315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.001347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.001555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.001584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.001751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.001781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.002098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.002113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.002290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.002310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.002441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.002456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.002744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.002773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.003005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.003034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 
00:29:13.923 [2024-07-15 20:27:39.003299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.003331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.003485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.003514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.003744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.003782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.003906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.003921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.004096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.004110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.004346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.004361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.004523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.004536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.004657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.004668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.004767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.004776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 00:29:13.923 [2024-07-15 20:27:39.005065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.923 [2024-07-15 20:27:39.005074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.923 qpair failed and we were unable to recover it. 
00:29:13.924 [2024-07-15 20:27:39.005239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.005249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.005426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.005435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.005610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.005638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.005914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.005943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.006153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.006182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.006465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.006497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.006635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.006664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.006902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.006911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.007106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.007116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.007398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.007408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 
00:29:13.924 [2024-07-15 20:27:39.007623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.007632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.007823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.007832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.008116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.008125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.008400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.008410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.008558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.008567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.008736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.008766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.009029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.009060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.009262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.009293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.009585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.009615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.009753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.009762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 
00:29:13.924 [2024-07-15 20:27:39.009973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.010002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.010241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.010279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.010568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.010598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.010759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.010789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.011086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.011115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.011408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.011439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.011608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.011619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.011800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.011829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.012119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.012149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.012451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.012482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 
00:29:13.924 [2024-07-15 20:27:39.012649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.012678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.012953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.012982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.013248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.013289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.013447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.013477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.013612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.013642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.013856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.013885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.014180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.014189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.014411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.014421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.014522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.014530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.924 [2024-07-15 20:27:39.014679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.014688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 
00:29:13.924 [2024-07-15 20:27:39.014935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.924 [2024-07-15 20:27:39.014944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.924 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.015133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.015143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.015368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.015378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.015551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.015560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.015785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.015815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.016059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.016089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.016409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.016446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.016544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.016553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.016829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.016838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.017008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.017018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 
00:29:13.925 [2024-07-15 20:27:39.017218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.017247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.017474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.017505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.017766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.017796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.018129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.018159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.018414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.018444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.018655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.018665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.018782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.018791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.019068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.019097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.019436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.019468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.019631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.019661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 
00:29:13.925 [2024-07-15 20:27:39.019825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.019855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.020153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.020182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.020338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.020368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.020600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.020629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.020778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.020808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.021030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.021060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.021322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.021334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.021442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.021451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.021671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.021681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.021876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.021885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 
00:29:13.925 [2024-07-15 20:27:39.022053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.022062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.022223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.022242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.022512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.022543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.022788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.022817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.023150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.023180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.023407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.023438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.023613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.023644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.023890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.023919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.024122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.024131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 00:29:13.925 [2024-07-15 20:27:39.024329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.925 [2024-07-15 20:27:39.024339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.925 qpair failed and we were unable to recover it. 
00:29:13.925 [2024-07-15 20:27:39.024550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.024580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.024728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.024758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.024983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.025013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.025329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.025360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.025509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.025538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.025691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.025720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.026011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.026041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.026245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.026282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.026563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.026593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.026852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.026861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 
00:29:13.926 [2024-07-15 20:27:39.027079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.027089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.027202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.027211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.027406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.027416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.027606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.027615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.027734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.027743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.027914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.027923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.028035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.028044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.028215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.028225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.028407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.028437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.028651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.028680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 
00:29:13.926 [2024-07-15 20:27:39.029026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.029056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.029343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.029373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.029577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.029607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.029875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.029905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.030104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.030113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.030367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.030398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.030609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.030618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.030783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.030792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.031092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.031122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.031272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.031303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 
00:29:13.926 [2024-07-15 20:27:39.031501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.031531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.031747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.031776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.032112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.032141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.032466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.032497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.032763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.032793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.033100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.033147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.926 qpair failed and we were unable to recover it. 00:29:13.926 [2024-07-15 20:27:39.033313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.926 [2024-07-15 20:27:39.033344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.033545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.033575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.033733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.033743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.034021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.034051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 
00:29:13.927 [2024-07-15 20:27:39.034298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.034329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.034547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.034577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.034730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.034739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.034991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.035021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.035229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.035265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.035510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.035540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.035745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.035775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.036110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.036139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.036425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.036457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.036723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.036732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 
00:29:13.927 [2024-07-15 20:27:39.036852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.036861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.037057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.037067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.037235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.037244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.037512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.037550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.037765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.037795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.038086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.038116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.038416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.038447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.038698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.038728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.038994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.039024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.039309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.039320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 
00:29:13.927 [2024-07-15 20:27:39.039492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.039501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.039779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.039789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.040063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.040072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.040247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.040259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.040461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.040491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.040710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.040740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.041017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.041047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.041339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.041370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.041641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.041671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.041868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.041897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 
00:29:13.927 [2024-07-15 20:27:39.042122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.042131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.042268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.042278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.042441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.042450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.042621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.042630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.042739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.042748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.043041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.043062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.043243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.927 [2024-07-15 20:27:39.043252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.927 qpair failed and we were unable to recover it. 00:29:13.927 [2024-07-15 20:27:39.043508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.043518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.043672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.043682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.044008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.044037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 
00:29:13.928 [2024-07-15 20:27:39.044217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.044247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.044553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.044584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.044801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.044830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.045070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.045100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.045237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.045276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.045494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.045524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.045819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.045849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.046078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.046107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.046427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.046458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.046736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.046766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 
00:29:13.928 [2024-07-15 20:27:39.046992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.047002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.047196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.047205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.047469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.047479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.047645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.047656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.047763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.047773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.048018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.048028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.048283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.048292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.048496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.048505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.048679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.048688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.048931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.048961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 
00:29:13.928 [2024-07-15 20:27:39.049277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.049308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.049527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.049556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.049731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.049760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.050052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.050061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.050227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.050237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.050390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.050400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.050564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.050573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.050802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.050812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.050921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.050929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.051165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.051175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 
00:29:13.928 [2024-07-15 20:27:39.051332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.051341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.051567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.051576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.051740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.051750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.051857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.051867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.052064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.052074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.052323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.052332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.052522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.052551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.052765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.928 [2024-07-15 20:27:39.052795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.928 qpair failed and we were unable to recover it. 00:29:13.928 [2024-07-15 20:27:39.053053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.929 [2024-07-15 20:27:39.053084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.929 qpair failed and we were unable to recover it. 00:29:13.929 [2024-07-15 20:27:39.053305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.929 [2024-07-15 20:27:39.053336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.929 qpair failed and we were unable to recover it. 
00:29:13.929 [2024-07-15 20:27:39.053557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.929 [2024-07-15 20:27:39.053587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:13.929 qpair failed and we were unable to recover it.
00:29:13.929 [2024-07-15 20:27:39.053744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.929 [2024-07-15 20:27:39.053754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:13.929 qpair failed and we were unable to recover it.
00:29:13.929 [... the same three-line error group (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 2024-07-15 20:27:39.053 through 20:27:39.103 ...]
00:29:13.934 [2024-07-15 20:27:39.103034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.934 [2024-07-15 20:27:39.103075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:13.934 qpair failed and we were unable to recover it.
00:29:13.934 [2024-07-15 20:27:39.103344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.103353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 00:29:13.934 [2024-07-15 20:27:39.103561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.103570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 00:29:13.934 [2024-07-15 20:27:39.103791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.103801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 00:29:13.934 [2024-07-15 20:27:39.104041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.104052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 00:29:13.934 [2024-07-15 20:27:39.104147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.104155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 00:29:13.934 [2024-07-15 20:27:39.104304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.104314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 00:29:13.934 [2024-07-15 20:27:39.104478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.104487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 00:29:13.934 [2024-07-15 20:27:39.104747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.104756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 00:29:13.934 [2024-07-15 20:27:39.105012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.105021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 00:29:13.934 [2024-07-15 20:27:39.105239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.105248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 
00:29:13.934 [2024-07-15 20:27:39.105521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.105531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 00:29:13.934 [2024-07-15 20:27:39.105682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.105692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 00:29:13.934 [2024-07-15 20:27:39.105861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.105870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 00:29:13.934 [2024-07-15 20:27:39.106098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.106127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 00:29:13.934 [2024-07-15 20:27:39.106409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.106441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 00:29:13.934 [2024-07-15 20:27:39.106672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.106681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 00:29:13.934 [2024-07-15 20:27:39.106853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.106862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 00:29:13.934 [2024-07-15 20:27:39.107093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.107102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 00:29:13.934 [2024-07-15 20:27:39.107327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.107337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 00:29:13.934 [2024-07-15 20:27:39.107579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.107587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 
00:29:13.934 [2024-07-15 20:27:39.107811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.107820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 00:29:13.934 [2024-07-15 20:27:39.108133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.934 [2024-07-15 20:27:39.108143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.934 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.108310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.108319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.108568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.108577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.108787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.108816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.109083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.109113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.109435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.109467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.109683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.109713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.109928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.109938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.110175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.110205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 
00:29:13.935 [2024-07-15 20:27:39.110447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.110478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.110676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.110705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.111016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.111025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.111175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.111183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.111338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.111348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.111500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.111509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.111619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.111627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.111877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.111886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.112150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.112159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.112447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.112456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 
00:29:13.935 [2024-07-15 20:27:39.112606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.112616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.112786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.112795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.113059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.113088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.113355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.113391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.113606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.113637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.113842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.113872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.114165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.114195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.114512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.114521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.114634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.114643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.114806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.114815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 
00:29:13.935 [2024-07-15 20:27:39.114979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.114988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.115078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.115087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.115237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.115246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.115510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.115539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.115734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.115764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.116081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.116090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.116240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.116250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.116404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.116414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.116536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.116545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.116716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.116725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 
00:29:13.935 [2024-07-15 20:27:39.116893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.116902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.117016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.117026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.935 [2024-07-15 20:27:39.117205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.935 [2024-07-15 20:27:39.117214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.935 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.117419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.117429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.117689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.117719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.117989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.118018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.118318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.118327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.118566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.118575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.118764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.118773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.118953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.118961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 
00:29:13.936 [2024-07-15 20:27:39.119227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.119237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.119391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.119401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.119600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.119609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.119777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.119785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.119951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.119960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.120181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.120190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.120453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.120463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.120633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.120641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.120757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.120766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.121009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.121018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 
00:29:13.936 [2024-07-15 20:27:39.121323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.121334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.121428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.121437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.121663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.121672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.121911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.121923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.122023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.122031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.122192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.122201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.122353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.122362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.122667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.122676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.122853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.122862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.123101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.123110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 
00:29:13.936 [2024-07-15 20:27:39.123333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.123343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.123436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.123444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.123616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.123625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.123743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.123752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.123995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.124004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.124176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.124185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.124432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.124442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.124541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.124550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.124704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.124713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.124965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.124974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 
00:29:13.936 [2024-07-15 20:27:39.125192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.125201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.125362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.125373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.125596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.125605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.936 [2024-07-15 20:27:39.125775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.936 [2024-07-15 20:27:39.125784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.936 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.125979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.125996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.126264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.126273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.126464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.126472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.126717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.126726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.126985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.126993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.127233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.127242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 
00:29:13.937 [2024-07-15 20:27:39.127570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.127580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.127696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.127705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.127946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.127955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.128189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.128198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.128360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.128369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.128537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.128546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.128698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.128707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.128909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.128918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.129166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.129175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.129464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.129473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 
00:29:13.937 [2024-07-15 20:27:39.129705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.129714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.129911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.129920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.130171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.130180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.130373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.130384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.130631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.130640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.130807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.130816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.131133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.131142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.131400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.131409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.131519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.131529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.131691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.131700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 
00:29:13.937 [2024-07-15 20:27:39.131998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.132007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.132171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.132181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.132421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.132431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.132597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.132606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.132707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.132716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.132910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.132918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.133113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.133122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.133345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.133354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.133577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.133587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 00:29:13.937 [2024-07-15 20:27:39.133877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.937 [2024-07-15 20:27:39.133887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.937 qpair failed and we were unable to recover it. 
00:29:13.942 [2024-07-15 20:27:39.175714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.942 [2024-07-15 20:27:39.175725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.942 qpair failed and we were unable to recover it. 00:29:13.942 [2024-07-15 20:27:39.175828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.942 [2024-07-15 20:27:39.175837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.942 qpair failed and we were unable to recover it. 00:29:13.942 [2024-07-15 20:27:39.176086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.942 [2024-07-15 20:27:39.176095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.942 qpair failed and we were unable to recover it. 00:29:13.942 [2024-07-15 20:27:39.176373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.942 [2024-07-15 20:27:39.176382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.942 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.176568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.176577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.176798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.176807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.177035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.177044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.177262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.177271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.177404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.177414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.177682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.177691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 
00:29:13.943 [2024-07-15 20:27:39.177790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.177801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.177999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.178009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.178263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.178272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.178404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.178414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.178529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.178538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.178783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.178792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.178957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.178966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.179127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.179136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.179362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.179371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.179465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.179473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 
00:29:13.943 [2024-07-15 20:27:39.179612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.179622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.179737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.179746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.180027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.180037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.180270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.180279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.180502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.180511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.180687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.180697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.180967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.180976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.181158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.181167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.181339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.181349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.181547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.181556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 
00:29:13.943 [2024-07-15 20:27:39.181725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.181733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.181885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.181894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.182150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.182160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.182325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.182334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.182449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.182458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.182634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.182643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.182761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.182770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.183003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.183012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.943 [2024-07-15 20:27:39.183230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.943 [2024-07-15 20:27:39.183239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.943 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.183390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.183400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 
00:29:13.944 [2024-07-15 20:27:39.183622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.183632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.183788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.183798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.183993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.184002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.184153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.184161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.184326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.184335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.184619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.184628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.184895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.184904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.185083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.185092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.185313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.185323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.185479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.185488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 
00:29:13.944 [2024-07-15 20:27:39.185661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.185672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.185921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.185930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.186119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.186128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.186369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.186379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.186571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.186580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.186789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.186798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.186910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.186920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.187022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.187031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.187203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.187213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.187330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.187339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 
00:29:13.944 [2024-07-15 20:27:39.187489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.187498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.187765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.187775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.187973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.187982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.188226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.188235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.188423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.188433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.188672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.188681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.188840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.188850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.188959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.188969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.189213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.189222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.189376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.189385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 
00:29:13.944 [2024-07-15 20:27:39.189605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.189615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.189791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.189801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.190029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.190038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.190228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.190238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.190589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.190598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.190832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.190841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.191063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.191072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.191294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.191304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.191548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.191557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.944 qpair failed and we were unable to recover it. 00:29:13.944 [2024-07-15 20:27:39.191795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.944 [2024-07-15 20:27:39.191804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 
00:29:13.945 [2024-07-15 20:27:39.191994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.192003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.192179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.192188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.192483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.192492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.192608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.192617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.192711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.192719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.192885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.192893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.193115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.193123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.193344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.193354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.193569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.193579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.193681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.193690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 
00:29:13.945 [2024-07-15 20:27:39.193923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.193934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.194152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.194160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.194361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.194370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.194531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.194541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.194757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.194766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.195047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.195056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.195251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.195264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.195491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.195500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.195673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.195682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.195969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.195978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 
00:29:13.945 [2024-07-15 20:27:39.196245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.196257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.196364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.196373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.196494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.196503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.196667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.196676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.196779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.196789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.196946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.196955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.197150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.197159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.197400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.197410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.197579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.197589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.197764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.197773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 
00:29:13.945 [2024-07-15 20:27:39.197945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.197954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.198117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.198126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.198275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.198285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.198440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.198449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.198615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.198624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.198844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.198853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.199054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.199062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.199308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.199317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.199499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.199508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 00:29:13.945 [2024-07-15 20:27:39.199675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.945 [2024-07-15 20:27:39.199684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.945 qpair failed and we were unable to recover it. 
00:29:13.945 [2024-07-15 20:27:39.199874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.946 [2024-07-15 20:27:39.199883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.946 qpair failed and we were unable to recover it. 00:29:13.946 [2024-07-15 20:27:39.200127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.946 [2024-07-15 20:27:39.200136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.946 qpair failed and we were unable to recover it. 00:29:13.946 [2024-07-15 20:27:39.200296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.946 [2024-07-15 20:27:39.200305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.946 qpair failed and we were unable to recover it. 00:29:13.946 [2024-07-15 20:27:39.200406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.946 [2024-07-15 20:27:39.200414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.946 qpair failed and we were unable to recover it. 00:29:13.946 [2024-07-15 20:27:39.200584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.946 [2024-07-15 20:27:39.200593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.946 qpair failed and we were unable to recover it. 00:29:13.946 [2024-07-15 20:27:39.200786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.946 [2024-07-15 20:27:39.200795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.946 qpair failed and we were unable to recover it. 00:29:13.946 [2024-07-15 20:27:39.201006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.946 [2024-07-15 20:27:39.201015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.946 qpair failed and we were unable to recover it. 00:29:13.946 [2024-07-15 20:27:39.201160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.946 [2024-07-15 20:27:39.201170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.946 qpair failed and we were unable to recover it. 00:29:13.946 [2024-07-15 20:27:39.201429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.946 [2024-07-15 20:27:39.201439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.946 qpair failed and we were unable to recover it. 00:29:13.946 [2024-07-15 20:27:39.201673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.946 [2024-07-15 20:27:39.201682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.946 qpair failed and we were unable to recover it. 
00:29:13.946 [2024-07-15 20:27:39.201873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.946 [2024-07-15 20:27:39.201883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.946 qpair failed and we were unable to recover it. 00:29:13.946 [2024-07-15 20:27:39.202189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.946 [2024-07-15 20:27:39.202197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.946 qpair failed and we were unable to recover it. 00:29:13.946 [2024-07-15 20:27:39.202379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.946 [2024-07-15 20:27:39.202389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.946 qpair failed and we were unable to recover it. 00:29:13.946 [2024-07-15 20:27:39.202507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.946 [2024-07-15 20:27:39.202515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.946 qpair failed and we were unable to recover it. 00:29:13.946 [2024-07-15 20:27:39.202685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.946 [2024-07-15 20:27:39.202694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.946 qpair failed and we were unable to recover it. 00:29:13.946 [2024-07-15 20:27:39.202789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.946 [2024-07-15 20:27:39.202797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.946 qpair failed and we were unable to recover it. 00:29:13.946 [2024-07-15 20:27:39.203017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.946 [2024-07-15 20:27:39.203026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.946 qpair failed and we were unable to recover it. 00:29:13.946 [2024-07-15 20:27:39.203180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.946 [2024-07-15 20:27:39.203189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.946 qpair failed and we were unable to recover it. 00:29:13.946 [2024-07-15 20:27:39.203342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.946 [2024-07-15 20:27:39.203351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.946 qpair failed and we were unable to recover it. 00:29:13.946 [2024-07-15 20:27:39.203523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.946 [2024-07-15 20:27:39.203531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:13.946 qpair failed and we were unable to recover it. 
[... the same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every subsequent connection attempt, with only the timestamps advancing, through 2024-07-15 20:27:39.248642 ...]
00:29:14.252 [2024-07-15 20:27:39.248862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.248891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.249184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.249213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.249515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.249546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.249836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.249866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.250162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.250192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.250414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.250444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.250702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.250711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.250906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.250915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.251031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.251039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.251290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.251314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 
00:29:14.252 [2024-07-15 20:27:39.251549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.251559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.251746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.251754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.252001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.252030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.252311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.252342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.252638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.252667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.252964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.252994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.253294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.253325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.253620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.253650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.253942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.253972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.254164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.254193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 
00:29:14.252 [2024-07-15 20:27:39.254429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.254461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.254745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.254754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.254974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.254983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.255149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.255159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.255380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.255390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.255573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.255582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.255744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.255753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.256032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.256062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.256300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.256331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.256547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.256576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 
00:29:14.252 [2024-07-15 20:27:39.256773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.256802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.257071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.257101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.257408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.257439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.257715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.257725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.257912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.257921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.258117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.258125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.258397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.258407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.258686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.252 [2024-07-15 20:27:39.258716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.252 qpair failed and we were unable to recover it. 00:29:14.252 [2024-07-15 20:27:39.258923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.258952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.259216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.259246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 
00:29:14.253 [2024-07-15 20:27:39.259528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.259558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.259893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.259922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.260120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.260150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.260313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.260343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.260644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.260674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.260965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.260995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.261268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.261299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.261427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.261450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.261656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.261666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.261919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.261928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 
00:29:14.253 [2024-07-15 20:27:39.262122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.262131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.262230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.262252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.262517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.262545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.262755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.262785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.263094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.263123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.263394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.263426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.263716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.263746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.264037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.264073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.264365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.264375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.264542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.264551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 
00:29:14.253 [2024-07-15 20:27:39.264748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.264778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.264946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.264976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.265272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.265303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.265568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.265597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.265800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.265830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.266114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.266144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.266290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.266320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.266617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.266627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.266800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.266810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.267072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.267081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 
00:29:14.253 [2024-07-15 20:27:39.267335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.267345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.267513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.267523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.267760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.267790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.267991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.268020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.268298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.268330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.268542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.268572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.268913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.268942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.269267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.269298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.269536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.253 [2024-07-15 20:27:39.269566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.253 qpair failed and we were unable to recover it. 00:29:14.253 [2024-07-15 20:27:39.269834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.269864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 
00:29:14.254 [2024-07-15 20:27:39.270069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.270099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.270327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.270336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.270527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.270537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.270711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.270741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.270986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.271016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.271225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.271283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.271552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.271582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.271861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.271870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.272040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.272049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.272242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.272284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 
00:29:14.254 [2024-07-15 20:27:39.272500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.272530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.272823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.272853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.273147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.273177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.273373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.273405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.273593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.273603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.273830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.273839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.274009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.274019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.274251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.274301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.274501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.274531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.274824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.274854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 
00:29:14.254 [2024-07-15 20:27:39.275159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.275189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.275461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.275471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.275630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.275639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.275883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.275892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.276091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.276101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.276407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.276438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.276594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.276623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.276760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.276790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.277007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.277037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.277149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.277179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 
00:29:14.254 [2024-07-15 20:27:39.277402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.277433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.277656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.277687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.254 [2024-07-15 20:27:39.278008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.254 [2024-07-15 20:27:39.278037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.254 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.278311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.278342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.278661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.278691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.278887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.278917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.279129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.279159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.279374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.279405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.279697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.279727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.279875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.279885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 
00:29:14.255 [2024-07-15 20:27:39.279968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.279977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.280207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.280216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.280411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.280421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.280657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.280686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.280830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.280860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.280999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.281028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.281267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.281298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.281543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.281572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.281859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.281868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.282113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.282122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 
00:29:14.255 [2024-07-15 20:27:39.282346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.282356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.282509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.282518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.282724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.282754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.282986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.283015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.283155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.283185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.283404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.283414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.283660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.283669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.283800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.283812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.284002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.284027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.284327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.284358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 
00:29:14.255 [2024-07-15 20:27:39.284624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.284654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.284968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.284997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.285280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.285311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.285450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.285480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.285744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.285753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.285998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.286008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.286180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.286192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.286299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.286308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.286542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.286551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.286764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.286773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 
00:29:14.255 [2024-07-15 20:27:39.286999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.287008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.287109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.287129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.287292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.287302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.255 qpair failed and we were unable to recover it. 00:29:14.255 [2024-07-15 20:27:39.287522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.255 [2024-07-15 20:27:39.287532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.287733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.287762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.287998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.288027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.288226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.288266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.288469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.288478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.288670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.288700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.288965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.288995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 
00:29:14.256 [2024-07-15 20:27:39.289193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.289223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.289360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.289370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.289620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.289630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.289786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.289796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.290029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.290038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.290197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.290206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.290456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.290466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.290616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.290626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.290799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.290808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.290962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.290991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 
00:29:14.256 [2024-07-15 20:27:39.291132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.291162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.291402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.291433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.291621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.291630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.291891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.291921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.292143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.292173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.292488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.292519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.292785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.292795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.292940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.292951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.293052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.293061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.293214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.293223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 
00:29:14.256 [2024-07-15 20:27:39.293331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.293340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.293520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.293530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.293787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.293816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.293959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.293989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.294140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.294169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.294404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.294413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.294633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.294642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.294719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.294727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.294838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.294846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.294925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.294934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 
00:29:14.256 [2024-07-15 20:27:39.295130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.295140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.295364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.295374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.295595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.295604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.295699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.256 [2024-07-15 20:27:39.295707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-07-15 20:27:39.295875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.295884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.296135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.296165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.296380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.296411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.296617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.296647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.296785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.296795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.296906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.296915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 
00:29:14.257 [2024-07-15 20:27:39.297062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.297072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.297198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.297208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.297439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.297449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.297563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.297572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.297730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.297739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.297822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.297830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.298050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.298059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.298177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.298187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.298368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.298377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.298497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.298506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 
00:29:14.257 [2024-07-15 20:27:39.298617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.298627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.298779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.298788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.299038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.299047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.299216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.299225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.299387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.299397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.299578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.299587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.299739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.299748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.299901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.299912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.300003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.300012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.300125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.300135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 
00:29:14.257 [2024-07-15 20:27:39.300248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.300262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.300488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.300497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.300595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.300605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.300700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.300709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.300861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.300871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.301089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.301098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.301373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.301383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.301462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.301471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.301620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.301629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.301799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.301809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 
00:29:14.257 [2024-07-15 20:27:39.301992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.302001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.302194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.302204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.302396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.302406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.302609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.257 [2024-07-15 20:27:39.302639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-07-15 20:27:39.302933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.302962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.303240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.303282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.303472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.303482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.303581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.303593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.303703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.303712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.303877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.303886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 
00:29:14.258 [2024-07-15 20:27:39.304056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.304065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.304270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.304301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.304483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.304513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.304643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.304672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.304888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.304918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.305125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.305156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.305361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.305392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.305547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.305577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.305771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.305801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.305930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.305959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 
00:29:14.258 [2024-07-15 20:27:39.306276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.306308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.306449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.306479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.306693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.306703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.306868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.306877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.307063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.307092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.307288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.307320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.307539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.307569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.307800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.307810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.307979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.307988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.308198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.308207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 
00:29:14.258 [2024-07-15 20:27:39.308379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.308410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.308554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.308584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.308812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.308841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.309130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.309159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.309396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.309426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.309569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.309599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.309863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.309893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.310008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.310037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.310346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-07-15 20:27:39.310378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-07-15 20:27:39.310604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.310633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 
00:29:14.259 [2024-07-15 20:27:39.310878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.310908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.311181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.311211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.311386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.311418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.311646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.311676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.311801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.311810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.311977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.311986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.312137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.312146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.312392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.312402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.312481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.312489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 
00:29:14.259 [2024-07-15 20:27:39.312516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc9eea0 (9): Bad file descriptor 00:29:14.259 [2024-07-15 20:27:39.312764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.312799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.312926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.312942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.313119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.313129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.313332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.313363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.313588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.313617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.313917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.313947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.314171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.314201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.314435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.314444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.314610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.314640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 
00:29:14.259 [2024-07-15 20:27:39.314793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.314822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.314954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.314983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.315140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.315170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.315377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.315409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.315563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.315572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.315722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.315731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.315946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.315975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.316208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.316237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.316464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.316474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.316721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.316730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 
00:29:14.259 [2024-07-15 20:27:39.316915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.316924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.317040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.317049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.317338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.317348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.317516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.317526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.317728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.317737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.317854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.317864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.318084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.318093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.318257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.318266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.318445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.318454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.318606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.318615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 
00:29:14.259 [2024-07-15 20:27:39.318716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.318726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.259 [2024-07-15 20:27:39.318913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.259 [2024-07-15 20:27:39.318922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.259 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.319110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.319121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.319419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.319428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.319670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.319679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.319833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.319842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.319951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.319960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.320049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.320058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.320241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.320250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.320349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.320358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 
00:29:14.260 [2024-07-15 20:27:39.320597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.320607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.320773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.320783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.321060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.321090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.321377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.321409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.321643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.321673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.321889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.321918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.322212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.322242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.322538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.322568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.322783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.322792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.322944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.322953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 
00:29:14.260 [2024-07-15 20:27:39.323213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.323243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.323572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.323603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.323819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.323827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.323995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.324005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.324221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.324230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.324404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.324413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.324569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.324579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.324876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.324905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.325213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.325243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.325491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.325501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 
00:29:14.260 [2024-07-15 20:27:39.325748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.325757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.325928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.325937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.326155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.326164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.326417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.326448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.326586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.326616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.326879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.326909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.327225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.327280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.327461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.327470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.327619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.327628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.327733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.327742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 
00:29:14.260 [2024-07-15 20:27:39.327908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.327917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.328171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.328181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.260 [2024-07-15 20:27:39.328343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.260 [2024-07-15 20:27:39.328354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.260 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.328526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.328536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.328762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.328771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.328884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.328893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.329089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.329098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.329344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.329354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.329547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.329556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.329655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.329667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 
00:29:14.261 [2024-07-15 20:27:39.329914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.329923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.330173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.330182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.330346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.330356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.330467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.330476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.330667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.330676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.330957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.330966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.331120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.331129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.331407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.331438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.331654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.331683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.331984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.332013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 
00:29:14.261 [2024-07-15 20:27:39.332302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.332334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.332553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.332562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.332810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.332819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.333022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.333031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.333219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.333248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.333414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.333445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.333747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.333778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.334090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.334119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.334431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.334462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.334750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.334780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 
00:29:14.261 [2024-07-15 20:27:39.334986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.335015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.335302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.335333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.335536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.335546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.335810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.335820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.336011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.336020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.336314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.336323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.336495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.336507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.336757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.336786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.337026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.337056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.337355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.337385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 
00:29:14.261 [2024-07-15 20:27:39.337663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.337701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.337962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.337971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.338142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.338153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.261 [2024-07-15 20:27:39.338314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.261 [2024-07-15 20:27:39.338324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.261 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.338498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.338507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.338631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.338640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.338802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.338811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.338937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.338947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.339190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.339200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.339325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.339335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 
00:29:14.262 [2024-07-15 20:27:39.339531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.339541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.339651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.339660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.339750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.339759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.339881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.339890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.339989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.339998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.340086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.340095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.340297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.340307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.340399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.340409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.340508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.340517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.340669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.340678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 
00:29:14.262 [2024-07-15 20:27:39.340870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.340879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.340977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.340986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.341149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.341158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.341273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.341282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.341382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.341391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.341564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.341573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.341732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.341771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.341969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.341999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.342234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.342275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.342453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.342463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 
00:29:14.262 [2024-07-15 20:27:39.342572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.342581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.342800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.342809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.342959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.342968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.343141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.343171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.343381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.343412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.343556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.343585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.343723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.343732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.343842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.343851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.344013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.344022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.344126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.344136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 
00:29:14.262 [2024-07-15 20:27:39.344316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.344327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.344607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.344637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.262 qpair failed and we were unable to recover it. 00:29:14.262 [2024-07-15 20:27:39.344839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.262 [2024-07-15 20:27:39.344874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.345067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.345097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.345265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.345295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.345444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.345473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.345680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.345710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.345837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.345866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.345995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.346021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.346219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.346228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 
00:29:14.263 [2024-07-15 20:27:39.346446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.346455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.346623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.346632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.346788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.346796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.347049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.347078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.347242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.347287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.347420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.347450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.347667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.347696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.347981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.348010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.348124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.348153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.348419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.348428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 
00:29:14.263 [2024-07-15 20:27:39.348650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.348669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.348763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.348772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.348958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.348967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.349135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.349144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.349312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.349332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.349425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.349433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.349661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.349671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.349772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.349780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.350052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.350061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.350177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.350186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 
00:29:14.263 [2024-07-15 20:27:39.350304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.350314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.350423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.350432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.350670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.350680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.350772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.350781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.350930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.350939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.351160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.351169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.351350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.351359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.351462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.351471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.351555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.351564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 00:29:14.263 [2024-07-15 20:27:39.351678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.263 [2024-07-15 20:27:39.351687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.263 qpair failed and we were unable to recover it. 
00:29:14.264 [2024-07-15 20:27:39.351774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.264 [2024-07-15 20:27:39.351782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.264 qpair failed and we were unable to recover it. 00:29:14.264 [2024-07-15 20:27:39.351941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.264 [2024-07-15 20:27:39.351951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.264 qpair failed and we were unable to recover it. 00:29:14.264 [2024-07-15 20:27:39.352061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.264 [2024-07-15 20:27:39.352072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.264 qpair failed and we were unable to recover it. 00:29:14.264 [2024-07-15 20:27:39.352291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.264 [2024-07-15 20:27:39.352301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.264 qpair failed and we were unable to recover it. 00:29:14.264 [2024-07-15 20:27:39.352464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.264 [2024-07-15 20:27:39.352473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.264 qpair failed and we were unable to recover it. 00:29:14.264 [2024-07-15 20:27:39.352643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.264 [2024-07-15 20:27:39.352652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.264 qpair failed and we were unable to recover it. 00:29:14.264 [2024-07-15 20:27:39.352743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.264 [2024-07-15 20:27:39.352752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.264 qpair failed and we were unable to recover it. 00:29:14.264 [2024-07-15 20:27:39.352995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.264 [2024-07-15 20:27:39.353004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.264 qpair failed and we were unable to recover it. 00:29:14.264 [2024-07-15 20:27:39.353110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.264 [2024-07-15 20:27:39.353119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.264 qpair failed and we were unable to recover it. 00:29:14.264 [2024-07-15 20:27:39.353290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.264 [2024-07-15 20:27:39.353300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.264 qpair failed and we were unable to recover it. 
00:29:14.264 [2024-07-15 20:27:39.353468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.264 [2024-07-15 20:27:39.353477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.264 qpair failed and we were unable to recover it.
(the same sequence for tqpair=0x7f3704000b90 repeats through 20:27:39.357132, interleaved with the following test script output)
00:29:14.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 216624 Killed "${NVMF_APP[@]}" "$@"
00:29:14.264 20:27:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:14.264 20:27:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:14.264 20:27:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:14.264 20:27:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:14.264 20:27:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:14.264 [2024-07-15 20:27:39.357356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.264 [2024-07-15 20:27:39.357376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.264 qpair failed and we were unable to recover it.
(the same sequence for tqpair=0x7f3704000b90 repeats through 20:27:39.361372)
00:29:14.265 [2024-07-15 20:27:39.361587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.265 [2024-07-15 20:27:39.361617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.265 qpair failed and we were unable to recover it.
(the same sequence for tqpair=0x7f3704000b90 repeats through 20:27:39.365987, interleaved with the following test script output)
00:29:14.265 20:27:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=217563
00:29:14.265 20:27:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 217563
00:29:14.265 20:27:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:14.265 20:27:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 217563 ']'
00:29:14.265 20:27:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:14.265 20:27:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:14.265 20:27:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:14.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:14.265 20:27:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:14.265 20:27:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:14.265 [2024-07-15 20:27:39.366207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.265 [2024-07-15 20:27:39.366216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.265 qpair failed and we were unable to recover it.
(the same sequence for tqpair=0x7f3704000b90 repeats through 20:27:39.386574)
00:29:14.268 [2024-07-15 20:27:39.386710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.268 [2024-07-15 20:27:39.386744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:14.268 qpair failed and we were unable to recover it. 00:29:14.268 [2024-07-15 20:27:39.386867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.268 [2024-07-15 20:27:39.386903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.268 qpair failed and we were unable to recover it. 00:29:14.268 [2024-07-15 20:27:39.387046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.268 [2024-07-15 20:27:39.387062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.268 qpair failed and we were unable to recover it. 00:29:14.268 [2024-07-15 20:27:39.387237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.268 [2024-07-15 20:27:39.387252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.268 qpair failed and we were unable to recover it. 00:29:14.268 [2024-07-15 20:27:39.387490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.268 [2024-07-15 20:27:39.387505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.268 qpair failed and we were unable to recover it. 00:29:14.268 [2024-07-15 20:27:39.387685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.268 [2024-07-15 20:27:39.387699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.268 qpair failed and we were unable to recover it. 00:29:14.268 [2024-07-15 20:27:39.387803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.268 [2024-07-15 20:27:39.387815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.268 qpair failed and we were unable to recover it. 00:29:14.268 [2024-07-15 20:27:39.388037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.268 [2024-07-15 20:27:39.388046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.268 qpair failed and we were unable to recover it. 00:29:14.268 [2024-07-15 20:27:39.388138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.268 [2024-07-15 20:27:39.388148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.268 qpair failed and we were unable to recover it. 00:29:14.268 [2024-07-15 20:27:39.388297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.268 [2024-07-15 20:27:39.388307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.268 qpair failed and we were unable to recover it. 
00:29:14.269 [2024-07-15 20:27:39.388397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.388405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.388503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.388512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.388741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.388750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.388848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.388857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.389102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.389110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.389260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.389270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.389378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.389387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.389486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.389495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.389663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.389672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.389890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.389899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 
00:29:14.269 [2024-07-15 20:27:39.389984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.389992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.390162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.390172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.390263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.390272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.390359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.390370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.390567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.390577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.390738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.390746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.390855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.390864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.390962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.390971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.391193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.391202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.391339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.391349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 
00:29:14.269 [2024-07-15 20:27:39.391521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.391530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.391700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.391710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.391898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.391907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.392150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.392159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.392318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.392328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.392484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.392494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.392587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.392596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.392677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.392685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.392838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.392847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.392996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.393006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 
00:29:14.269 [2024-07-15 20:27:39.393099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.393111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.393175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.393183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.393356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.393365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.393588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.393597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.393696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.393705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.393801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.393810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.393963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.393972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.394122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.394131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.394224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.394234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 00:29:14.269 [2024-07-15 20:27:39.394325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.269 [2024-07-15 20:27:39.394334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.269 qpair failed and we were unable to recover it. 
00:29:14.269 [2024-07-15 20:27:39.394420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.394429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.394602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.394611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.394739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.394748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.394897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.394906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.395056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.395066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.395159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.395169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.395335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.395346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.395441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.395451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.395552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.395561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.395659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.395668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 
00:29:14.270 [2024-07-15 20:27:39.395840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.395850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.396012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.396021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.396184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.396193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.396384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.396393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.396476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.396485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.396575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.396585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.396736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.396745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.396922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.396931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.397014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.397023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.397182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.397190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 
00:29:14.270 [2024-07-15 20:27:39.397287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.397297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.397397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.397406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.397491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.397501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.397736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.397745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.397855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.397865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.397957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.397966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.398196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.398205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.398333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.398342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.398436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.398445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.398597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.398606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 
00:29:14.270 [2024-07-15 20:27:39.398763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.398774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.398868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.398877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.399028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.399037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.399136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.399145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.399304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.399313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.399591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.399600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.399698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.399707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.399872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.399882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.400045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.400054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.400116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.400124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 
00:29:14.270 [2024-07-15 20:27:39.400292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.270 [2024-07-15 20:27:39.400301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.270 qpair failed and we were unable to recover it. 00:29:14.270 [2024-07-15 20:27:39.400524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.400533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.400691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.400701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.400780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.400788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.400881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.400890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.401044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.401052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.401165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.401174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.401264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.401273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.401354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.401362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.401529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.401538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 
00:29:14.271 [2024-07-15 20:27:39.401635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.401646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.401756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.401766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.402095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.402105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.402261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.402271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.402378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.402388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.402482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.402491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.402740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.402749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.402839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.402848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.403044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.403053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.403274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.403283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 
00:29:14.271 [2024-07-15 20:27:39.403378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.403387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.403500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.403509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.403712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.403720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.403916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.403925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.404057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.404066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.404233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.404241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.404325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.404334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.404428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.404437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.404610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.404619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.404701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.404709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 
00:29:14.271 [2024-07-15 20:27:39.404810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.404820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.404994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.405003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.405155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.405165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.405269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.405278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.405460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.405469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.405564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.271 [2024-07-15 20:27:39.405572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.271 qpair failed and we were unable to recover it. 00:29:14.271 [2024-07-15 20:27:39.405666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.405675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.405870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.405879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.405971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.405980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.406076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.406086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 
00:29:14.272 [2024-07-15 20:27:39.406309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.406318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.406417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.406426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.406529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.406538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.406620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.406629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.406729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.406738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.406910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.406918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.407014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.407022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.407120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.407129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.407231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.407240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.407320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.407328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 
00:29:14.272 [2024-07-15 20:27:39.407546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.407555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.407715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.407724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.407878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.407887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.408042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.408051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.408151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.408160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.408308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.408318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.408404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.408412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.408645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.408653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.408789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.408798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 00:29:14.272 [2024-07-15 20:27:39.408890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.272 [2024-07-15 20:27:39.408899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.272 qpair failed and we were unable to recover it. 
00:29:14.272 [2024-07-15 20:27:39.408982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.272 [2024-07-15 20:27:39.408990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.272 qpair failed and we were unable to recover it.
00:29:14.272 [2024-07-15 20:27:39.409085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.272 [2024-07-15 20:27:39.409094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.272 qpair failed and we were unable to recover it.
00:29:14.272 [2024-07-15 20:27:39.409263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.272 [2024-07-15 20:27:39.409273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.272 qpair failed and we were unable to recover it.
00:29:14.272 [2024-07-15 20:27:39.409451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.272 [2024-07-15 20:27:39.409460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.272 qpair failed and we were unable to recover it.
00:29:14.272 [2024-07-15 20:27:39.409536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.272 [2024-07-15 20:27:39.409544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.272 qpair failed and we were unable to recover it.
00:29:14.272 [2024-07-15 20:27:39.409661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.272 [2024-07-15 20:27:39.409669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.272 qpair failed and we were unable to recover it.
00:29:14.272 [2024-07-15 20:27:39.409845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.272 [2024-07-15 20:27:39.409855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.272 qpair failed and we were unable to recover it.
00:29:14.272 [2024-07-15 20:27:39.410070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.272 [2024-07-15 20:27:39.410080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.272 qpair failed and we were unable to recover it.
00:29:14.272 [2024-07-15 20:27:39.410171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.272 [2024-07-15 20:27:39.410180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.272 qpair failed and we were unable to recover it.
00:29:14.272 [2024-07-15 20:27:39.410268] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization...
00:29:14.272 [2024-07-15 20:27:39.410317] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:14.272 [2024-07-15 20:27:39.410352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.272 [2024-07-15 20:27:39.410364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.272 qpair failed and we were unable to recover it.
00:29:14.272 [2024-07-15 20:27:39.410532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.272 [2024-07-15 20:27:39.410540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.272 qpair failed and we were unable to recover it.
00:29:14.272 [2024-07-15 20:27:39.410747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.272 [2024-07-15 20:27:39.410755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.272 qpair failed and we were unable to recover it.
00:29:14.272 [2024-07-15 20:27:39.410857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.272 [2024-07-15 20:27:39.410866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.272 qpair failed and we were unable to recover it.
00:29:14.272 [2024-07-15 20:27:39.410938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.272 [2024-07-15 20:27:39.410947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.272 qpair failed and we were unable to recover it.
00:29:14.272 [2024-07-15 20:27:39.411026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.272 [2024-07-15 20:27:39.411034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.272 qpair failed and we were unable to recover it.
00:29:14.272 [2024-07-15 20:27:39.411262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.272 [2024-07-15 20:27:39.411272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.272 qpair failed and we were unable to recover it.
00:29:14.273 [2024-07-15 20:27:39.411369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.273 [2024-07-15 20:27:39.411379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.273 qpair failed and we were unable to recover it.
00:29:14.273 [2024-07-15 20:27:39.411478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.273 [2024-07-15 20:27:39.411487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.273 qpair failed and we were unable to recover it.
00:29:14.273 [2024-07-15 20:27:39.411590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.411599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.411703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.411712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.411929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.411939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.412054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.412063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.412245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.412262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.412353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.412363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.412535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.412545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.412703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.412713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.412933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.412942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.413036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.413046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 
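The failure pattern that repeats throughout this section - "connect() failed, errno = 111" from posix_sock_create, followed by nvme_tcp_qpair_connect_sock giving up on the qpair - corresponds to ECONNREFUSED on Linux: the initiator is dialing the NVMe/TCP listener at 10.0.0.2 port 4420 (address and port taken from the log) and nothing is accepting on that port at that moment, so each attempt is refused and retried. Below is a minimal standalone sketch that reproduces the same errno when the address is reachable but no listener is running; it is not the SPDK implementation:

/* Attempt a plain TCP connect to the address/port seen in the log and print
 * the resulting errno.  If the host is reachable but nothing is listening on
 * port 4420, this prints errno 111 (Connection refused) on Linux. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}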
00:29:14.273 [2024-07-15 20:27:39.413150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.413163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.413386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.413396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.413484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.413494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.413660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.413670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.413762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.413772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.413939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.413948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.414034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.414044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.414262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.414272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.414394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.414403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.414555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.414564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 
00:29:14.273 [2024-07-15 20:27:39.414648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.414656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.414771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.414781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.414943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.414953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.415043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.415054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.415204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.415213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.415334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.415344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.415514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.415523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.415617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.415626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.415796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.415805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.416023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.416032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 
00:29:14.273 [2024-07-15 20:27:39.416182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.416192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.416297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.416308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.416460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.416470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.416582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.416591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.416672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.416680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.416784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.416793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.416875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.416884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.417118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.417126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.273 [2024-07-15 20:27:39.417288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.273 [2024-07-15 20:27:39.417298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.273 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.417396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.417412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 
00:29:14.274 [2024-07-15 20:27:39.417641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.417651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.417897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.417906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.418070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.418079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.418170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.418179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.418345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.418422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc90d70 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.418654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.418672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.418847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.418861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.419021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.419035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.419284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.419300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.419405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.419419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 
00:29:14.274 [2024-07-15 20:27:39.419585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.419600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.419808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.419828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.419936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.419950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.420123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.420137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.420297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.420312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.420415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.420430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.420521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.420532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.420603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.420611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.420687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.420696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.420851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.420861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 
00:29:14.274 [2024-07-15 20:27:39.420974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.420984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.421134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.421143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.421225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.421235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.421393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.421402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.421484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.421493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.421580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.421589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.421672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.421681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.421853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.421862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.421951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.421961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.422126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.422135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 
00:29:14.274 [2024-07-15 20:27:39.422297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.422307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.422402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.422411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.422576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.422584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.422686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.422695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.422978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.422987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.423137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.423146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.423225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.423234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.423328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.423337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.423449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.423458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 00:29:14.274 [2024-07-15 20:27:39.423573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.274 [2024-07-15 20:27:39.423583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.274 qpair failed and we were unable to recover it. 
00:29:14.275 [2024-07-15 20:27:39.423738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.423747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.423992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.424001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.424099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.424109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.424283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.424293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.424459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.424470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.424568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.424577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.424659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.424668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.424916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.424925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.425081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.425090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.425185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.425194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 
00:29:14.275 [2024-07-15 20:27:39.425302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.425311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.425483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.425492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.425643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.425652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.425860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.425869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.425966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.425975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.426059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.426067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.426168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.426177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.426405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.426415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.426586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.426595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.426755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.426763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 
00:29:14.275 [2024-07-15 20:27:39.426909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.426918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.427037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.427046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.427194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.427203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.427390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.427400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.427567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.427575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.427746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.427755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.427877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.427886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.428143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.428152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.428326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.428335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.428422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.428430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 
00:29:14.275 [2024-07-15 20:27:39.428578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.428587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.428809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.428818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.428913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.428922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.429171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.429180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.429413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.275 [2024-07-15 20:27:39.429422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.275 qpair failed and we were unable to recover it. 00:29:14.275 [2024-07-15 20:27:39.429575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.429593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.429679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.429689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.429822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.429831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.429921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.429930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.430151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.430160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 
00:29:14.276 [2024-07-15 20:27:39.430322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.430332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.430414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.430426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.430595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.430604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.430696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.430706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.430799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.430808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.431058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.431067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.431236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.431245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.431329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.431338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.431516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.431525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.431744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.431753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 
00:29:14.276 [2024-07-15 20:27:39.431960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.431969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.432122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.432131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.432294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.432303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.432499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.432509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.432677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.432686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.432785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.432795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.433026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.433035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.433223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.433232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.433386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.433395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.433623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.433632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 
00:29:14.276 [2024-07-15 20:27:39.433785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.433793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.433954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.433963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.434133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.434142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.434393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.434403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.434497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.434506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.434695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.434704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.434898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.434907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.435088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.435097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.435261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.435271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 00:29:14.276 [2024-07-15 20:27:39.435457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.276 [2024-07-15 20:27:39.435466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.276 qpair failed and we were unable to recover it. 
00:29:14.276 [2024-07-15 20:27:39.435632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.276 [2024-07-15 20:27:39.435641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.276 qpair failed and we were unable to recover it.
00:29:14.276 [... the same connect() failed, errno = 111 / sock connection error / qpair failed sequence repeats for tqpair=0x7f3704000b90, addr=10.0.0.2, port=4420 ...]
00:29:14.277 EAL: No free 2048 kB hugepages reported on node 1
00:29:14.277 [... the same connect() failed, errno = 111 / sock connection error / qpair failed sequence continues to repeat ...]
00:29:14.282 [2024-07-15 20:27:39.469583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.282 [2024-07-15 20:27:39.469592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.282 qpair failed and we were unable to recover it.
00:29:14.282 [2024-07-15 20:27:39.469670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.469678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.469863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.469872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.470008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.470017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.470202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.470211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.470386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.470397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.470505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.470514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.470665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.470674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.470896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.470905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.471012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.471021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.471158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.471166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 
00:29:14.282 [2024-07-15 20:27:39.471355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.471364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.471556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.471565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.471770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.471779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.471933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.471942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.472114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.472123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.472211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.472221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.472393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.472403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.472578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.472587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.472751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.472760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.472845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.472853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 
00:29:14.282 [2024-07-15 20:27:39.473108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.473117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.473201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.473209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.473359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.473369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.473455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.473463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.473611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.473620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.473723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.473732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.473910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.473919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.474058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.474067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.474133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.474141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.474328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.474337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 
00:29:14.282 [2024-07-15 20:27:39.474485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.474494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.474577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.474587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.474806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.474815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.474901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.474909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.475075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.475084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.475204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.282 [2024-07-15 20:27:39.475213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.282 qpair failed and we were unable to recover it. 00:29:14.282 [2024-07-15 20:27:39.475392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.475401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.475550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.475559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.475747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.475756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.475862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.475870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 
00:29:14.283 [2024-07-15 20:27:39.476110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.476119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.476281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.476290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.476403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.476412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.476516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.476524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.476693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.476702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.476793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.476802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.476955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.476964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.477125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.477134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.477281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.477291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.477458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.477467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 
00:29:14.283 [2024-07-15 20:27:39.477554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.477562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.477714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.477724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.477826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.477835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.478004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.478013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.478092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.478100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.478238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.478246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.478411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.478420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.478516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.478525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.478747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.478755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.478846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.478856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 
00:29:14.283 [2024-07-15 20:27:39.478939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.478947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.479131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.479140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.479376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.479385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.479544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.479553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.479642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.479652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.479799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.479808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.480026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.480035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.480122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.480130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.480279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.480289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.480460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.480469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 
00:29:14.283 [2024-07-15 20:27:39.480646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.480654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.480852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.480863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.480967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.480975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.481082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.481091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.481192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.481201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.481396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.481405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.481506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.283 [2024-07-15 20:27:39.481515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.283 qpair failed and we were unable to recover it. 00:29:14.283 [2024-07-15 20:27:39.481678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.481687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.481788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.481797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.481895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.481904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 
00:29:14.284 [2024-07-15 20:27:39.482051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.482060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.482238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.482247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.482405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.482414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.482500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.482508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.482662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.482671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.482858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.482867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.482960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.482969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.483216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.483225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.483458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.483468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.483638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.483647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 
00:29:14.284 [2024-07-15 20:27:39.483836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.483845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.484048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.484056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.484153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.484162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.484407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.484416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.484567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.484577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.484676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.484686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.484839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.484848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.485081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.485090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.485259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.485269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.485366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.485376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 
00:29:14.284 [2024-07-15 20:27:39.485491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.485501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.485578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.485586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.485698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.485707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.485876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.485885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.486045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.486054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.486279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.486289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.486396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.486405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.486586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.486595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.486692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.486701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.486918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.486927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 
00:29:14.284 [2024-07-15 20:27:39.487026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.487035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.487207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.284 [2024-07-15 20:27:39.487217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.284 qpair failed and we were unable to recover it. 00:29:14.284 [2024-07-15 20:27:39.487298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.487307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.487476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.487485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.487638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.487648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.487750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.487760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.487864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.487873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.488024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.488033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.488181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.488190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.488288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.488297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 
00:29:14.285 [2024-07-15 20:27:39.488481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.488490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.488721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.488729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.488819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.488828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.489045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.489054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.489143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.489152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.489233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.489241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.489466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.489475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.489627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.489635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.489793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.489802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.490019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.490027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 
00:29:14.285 [2024-07-15 20:27:39.490126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.490135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.490314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.490323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.490566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.490575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.490743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.490752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.490863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.490872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.491036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.491045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.491224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.491232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.491403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.491412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.491522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.491531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.491680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.491689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 
00:29:14.285 [2024-07-15 20:27:39.491784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.491793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.491944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.491953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.492197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.492206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.492364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.492374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.492477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.492486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.492640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.492649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.492743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.492752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.492944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.492953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.493043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.493051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.493141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.493149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 
00:29:14.285 [2024-07-15 20:27:39.493297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.493307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.493385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.493394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.493469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.285 [2024-07-15 20:27:39.493477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.285 qpair failed and we were unable to recover it. 00:29:14.285 [2024-07-15 20:27:39.493584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.493593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.493881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.493890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.493985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.493993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.494106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.494115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.494366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.494375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.494530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.494539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.494691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.494700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 
00:29:14.286 [2024-07-15 20:27:39.494932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.494941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.495046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.495055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.495204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.495213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.495363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.495373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.495464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.495472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.495642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.495651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.495868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.495876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.496041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.496050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.496152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.496161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.496312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.496321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 
00:29:14.286 [2024-07-15 20:27:39.496490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.496499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.496578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.496586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.496677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.496686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.496849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.496858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.497009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.497018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.497184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.497193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.497412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.497422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.497639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.497648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.497750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.497759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.497919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.497928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 
00:29:14.286 [2024-07-15 20:27:39.498076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.498084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.498271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.498281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.498432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.498442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.498680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.498689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.498835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.498844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.499012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.499021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.499202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.499211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.499527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.499536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.499784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.499793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.500063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.500072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 
00:29:14.286 [2024-07-15 20:27:39.500224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.500233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.500397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.500407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.500521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.500530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.286 [2024-07-15 20:27:39.500716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.286 [2024-07-15 20:27:39.500725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.286 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.500872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.500881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.500989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.500998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.501243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.501252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.501352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.501361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.501603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.501612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.501785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.501794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 
00:29:14.287 [2024-07-15 20:27:39.501965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.501974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.502144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.502153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.502264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.502273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.502381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.502390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.502633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.502642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.502804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.502813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.502901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.502910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.503059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.503068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.503220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.503229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.503325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.503334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 
00:29:14.287 [2024-07-15 20:27:39.503616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.503625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.503773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.503782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.504047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.504056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.504232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.504241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.504480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.504489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.504740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.504749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.504909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.504918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.505015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.505023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.505295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.505305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.505464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.505473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 
00:29:14.287 [2024-07-15 20:27:39.505743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.505752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.505999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.506008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.506246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.506258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.506425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.506434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.506652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.506661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.506829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.506838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.507004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.507013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.507103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.507112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.507262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.507272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.507491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.507499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 
00:29:14.287 [2024-07-15 20:27:39.507763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.507772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.507932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.507943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.508029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.508037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.508133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.508142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.287 [2024-07-15 20:27:39.508360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.287 [2024-07-15 20:27:39.508370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.287 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.508521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.508530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.508756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.508765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.508928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.508937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.509111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.509120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.509232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.509241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 
00:29:14.288 [2024-07-15 20:27:39.509395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.509405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.509630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.509639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.509868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.509876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.510111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.510120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.510272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.510281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.510429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.510439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.510601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.510610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.510792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.510801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.510906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.510915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.511016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.511024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 
00:29:14.288 [2024-07-15 20:27:39.511173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.511183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.511323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.511332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.511494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.511503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.511693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.511702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.511779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.511787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.511924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.511933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.512120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.512129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.512295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.512305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.512526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.512535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.512650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.512658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 
00:29:14.288 [2024-07-15 20:27:39.512825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.512834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.513057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.513066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.513233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.513242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.513337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.513345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.513443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.513452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.513603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.513612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.513705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.513713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.513913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.513921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.514166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.514174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 00:29:14.288 [2024-07-15 20:27:39.514369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.514378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.288 qpair failed and we were unable to recover it. 
00:29:14.288 [2024-07-15 20:27:39.514526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.288 [2024-07-15 20:27:39.514535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.514702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.514713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.514939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.514948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.515122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.515131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.515357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.515366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.515512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.515521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.515686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.515695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.515861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.515870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.516021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.516030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.516274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.516283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 
00:29:14.289 [2024-07-15 20:27:39.516442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.516451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.516650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.516659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.516928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.516937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.517084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.517093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.517248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.517261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.517417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.517427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.517590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.517599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.517818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.517826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.517916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.517924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.518032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.518041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 
00:29:14.289 [2024-07-15 20:27:39.518144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.518153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.518318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.518327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.518477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.518486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.518717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.518726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.518909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.518918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.519117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.519127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.519235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.519244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.519344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.519353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.519505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.519514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.519673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.519681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 
00:29:14.289 [2024-07-15 20:27:39.519772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.289 [2024-07-15 20:27:39.519781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.289 qpair failed and we were unable to recover it.
00:29:14.289 [2024-07-15 20:27:39.519982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.289 [2024-07-15 20:27:39.519990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.289 qpair failed and we were unable to recover it.
00:29:14.289 [2024-07-15 20:27:39.520142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:14.289 [2024-07-15 20:27:39.520164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.289 [2024-07-15 20:27:39.520176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.289 qpair failed and we were unable to recover it.
00:29:14.289 [2024-07-15 20:27:39.520269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.289 [2024-07-15 20:27:39.520278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.289 qpair failed and we were unable to recover it.
00:29:14.289 [2024-07-15 20:27:39.520357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.289 [2024-07-15 20:27:39.520366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.289 qpair failed and we were unable to recover it.
00:29:14.289 [2024-07-15 20:27:39.520468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.289 [2024-07-15 20:27:39.520476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.289 qpair failed and we were unable to recover it.
00:29:14.289 [2024-07-15 20:27:39.520626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.289 [2024-07-15 20:27:39.520634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.289 qpair failed and we were unable to recover it.
00:29:14.289 [2024-07-15 20:27:39.520729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.289 [2024-07-15 20:27:39.520738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.289 qpair failed and we were unable to recover it.
00:29:14.289 [2024-07-15 20:27:39.520895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.289 [2024-07-15 20:27:39.520904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.289 qpair failed and we were unable to recover it.
00:29:14.289 [2024-07-15 20:27:39.521123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.521132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.521353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.289 [2024-07-15 20:27:39.521363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.289 qpair failed and we were unable to recover it. 00:29:14.289 [2024-07-15 20:27:39.521491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.521500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.521650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.521659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.521756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.521765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.521983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.521992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.522155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.522164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.522396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.522406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.522566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.522575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.522757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.522766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 
00:29:14.290 [2024-07-15 20:27:39.522863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.522872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.522980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.522990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.523159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.523168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.523263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.523271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.523367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.523376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.523616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.523627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.523715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.523723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.523916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.523926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.524042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.524051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.524303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.524313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 
00:29:14.290 [2024-07-15 20:27:39.524404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.524413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.524530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.524539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.524689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.524698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.524862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.524872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.524987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.524996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.525101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.525110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.525261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.525271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.525452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.525461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.525629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.525638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.525744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.525753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 
00:29:14.290 [2024-07-15 20:27:39.525907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.525917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.526172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.526181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.526368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.526378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.526528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.526537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.526695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.526704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.526855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.526865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.527050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.527059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.527219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.527228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.527394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.527404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.527586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.527595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 
00:29:14.290 [2024-07-15 20:27:39.527756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.527766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.527917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.527927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.290 [2024-07-15 20:27:39.528108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.290 [2024-07-15 20:27:39.528117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.290 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.528346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.528356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.528601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.528611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.528779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.528788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.528906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.528915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.529018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.529027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.529210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.529220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.529416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.529426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 
00:29:14.291 [2024-07-15 20:27:39.529514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.529522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.529740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.529750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.529915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.529925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.530086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.530095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.530275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.530284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.530434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.530445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.530641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.530651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.530727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.530736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.530837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.530847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.531001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.531011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 
00:29:14.291 [2024-07-15 20:27:39.531158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.531168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.531360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.531370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.531592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.531601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.531706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.531716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.531984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.531994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.532160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.532169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.532338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.532349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.532461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.532470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.532626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.532635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.532807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.532817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 
00:29:14.291 [2024-07-15 20:27:39.532972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.532982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.533230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.533240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.533405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.533415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.533504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.533512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.533706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.533715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.533870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.533879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.534051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.534060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.534139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.534147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.534316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.534326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.534490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.534499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 
00:29:14.291 [2024-07-15 20:27:39.534590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.534599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.534752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.534761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.534859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.534868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.535017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.535026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.535245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.291 [2024-07-15 20:27:39.535257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.291 qpair failed and we were unable to recover it. 00:29:14.291 [2024-07-15 20:27:39.535421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.535430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.535534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.535543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.535640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.535649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.535817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.535826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.535943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.535952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 
00:29:14.292 [2024-07-15 20:27:39.536172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.536181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.536351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.536361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.536542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.536551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.536748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.536757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.536864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.536873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.537024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.537035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.537185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.537194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.537371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.537380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.537540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.537549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.537769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.537778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 
00:29:14.292 [2024-07-15 20:27:39.537938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.537946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.538096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.538105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.538203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.538213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.538307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.538315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.538496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.538505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.538755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.538764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.538996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.539005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.539101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.539111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.539273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.539282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.539559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.539568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 
00:29:14.292 [2024-07-15 20:27:39.539733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.539742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.540022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.540031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.540225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.540234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.540327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.540336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.540552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.540561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.540727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.540736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.540979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.540988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.541150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.541158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.541412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.541421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.541662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.541671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 
00:29:14.292 [2024-07-15 20:27:39.541819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.541828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.541994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.542003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.542170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.542179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.542418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.292 [2024-07-15 20:27:39.542428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.292 qpair failed and we were unable to recover it. 00:29:14.292 [2024-07-15 20:27:39.542587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.542596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.542750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.542759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.542837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.542845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.542988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.542997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.543264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.543273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.543413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.543421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 
00:29:14.293 [2024-07-15 20:27:39.543512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.543521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.543668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.543677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.543857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.543865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.544054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.544062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.544211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.544219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.544327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.544338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.544492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.544501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.544768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.544777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.544936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.544945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.545108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.545117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 
00:29:14.293 [2024-07-15 20:27:39.545298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.545308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.545416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.545425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.545620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.545629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.545723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.545732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.545832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.545841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.546128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.546138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.546234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.546243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.546364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.546373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.546618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.546627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.546732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.546741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 
00:29:14.293 [2024-07-15 20:27:39.546823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.546831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.546923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.546932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.547124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.547133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.547284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.547294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.547408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.547417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.293 [2024-07-15 20:27:39.547590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.293 [2024-07-15 20:27:39.547598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.293 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.547690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.547699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.547918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.547927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.548084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.548093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.548251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.548266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 
00:29:14.294 [2024-07-15 20:27:39.548431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.548440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.548554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.548563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.548730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.548739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.548927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.548936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.549099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.549108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.549212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.549221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.549373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.549383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.549468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.549476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.549573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.549581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.549672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.549680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 
00:29:14.294 [2024-07-15 20:27:39.549899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.549908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.550021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.550030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.550304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.550313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.550477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.550486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.550588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.550596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.550775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.550784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.550953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.550962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.551073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.551082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.551187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.551196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.551293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.551301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 
00:29:14.294 [2024-07-15 20:27:39.551518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.551527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.551747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.551756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.551846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.551855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.552013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.552022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.552193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.552202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.552454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.552463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.552615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.552624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.552762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.552771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.552994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.553004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.553173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.553182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 
00:29:14.294 [2024-07-15 20:27:39.553262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.553271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.553438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.553448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.553635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.553643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.553756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.553765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.553920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.553929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.294 [2024-07-15 20:27:39.554034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.294 [2024-07-15 20:27:39.554043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.294 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.554270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.554279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.554440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.554449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.554683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.554692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.554903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.554912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 
00:29:14.295 [2024-07-15 20:27:39.555011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.555019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.555237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.555247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.555430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.555441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.555604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.555614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.555840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.555849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.555933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.555942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.556022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.556030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.556202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.556211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.556363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.556373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.556576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.556585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 
00:29:14.295 [2024-07-15 20:27:39.556767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.556776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.556995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.557004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.557169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.557178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.557263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.557272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.557369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.557377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.557480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.557488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.557651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.557660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.557863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.557872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.557960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.557968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.558119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.558128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 
00:29:14.295 [2024-07-15 20:27:39.558314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.558324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.558444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.295 [2024-07-15 20:27:39.558453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.295 qpair failed and we were unable to recover it. 00:29:14.295 [2024-07-15 20:27:39.558602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.582 [2024-07-15 20:27:39.558612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.582 qpair failed and we were unable to recover it. 00:29:14.582 [2024-07-15 20:27:39.558709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.582 [2024-07-15 20:27:39.558718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.582 qpair failed and we were unable to recover it. 00:29:14.582 [2024-07-15 20:27:39.558813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.582 [2024-07-15 20:27:39.558824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.582 qpair failed and we were unable to recover it. 00:29:14.582 [2024-07-15 20:27:39.558983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.582 [2024-07-15 20:27:39.558993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.582 qpair failed and we were unable to recover it. 00:29:14.582 [2024-07-15 20:27:39.559088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.582 [2024-07-15 20:27:39.559099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.582 qpair failed and we were unable to recover it. 00:29:14.582 [2024-07-15 20:27:39.559207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.582 [2024-07-15 20:27:39.559217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.582 qpair failed and we were unable to recover it. 00:29:14.582 [2024-07-15 20:27:39.559337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.582 [2024-07-15 20:27:39.559346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.582 qpair failed and we were unable to recover it. 00:29:14.582 [2024-07-15 20:27:39.559517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.582 [2024-07-15 20:27:39.559526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.582 qpair failed and we were unable to recover it. 
00:29:14.582 [2024-07-15 20:27:39.559646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.582 [2024-07-15 20:27:39.559655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.582 qpair failed and we were unable to recover it. 00:29:14.582 [2024-07-15 20:27:39.559802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.582 [2024-07-15 20:27:39.559811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.582 qpair failed and we were unable to recover it. 00:29:14.582 [2024-07-15 20:27:39.560050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.582 [2024-07-15 20:27:39.560059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.582 qpair failed and we were unable to recover it. 00:29:14.582 [2024-07-15 20:27:39.560168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.582 [2024-07-15 20:27:39.560177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.582 qpair failed and we were unable to recover it. 00:29:14.582 [2024-07-15 20:27:39.560367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.582 [2024-07-15 20:27:39.560377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.582 qpair failed and we were unable to recover it. 00:29:14.582 [2024-07-15 20:27:39.560508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.582 [2024-07-15 20:27:39.560517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.582 qpair failed and we were unable to recover it. 00:29:14.582 [2024-07-15 20:27:39.560715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.582 [2024-07-15 20:27:39.560723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.582 qpair failed and we were unable to recover it. 00:29:14.582 [2024-07-15 20:27:39.560828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.582 [2024-07-15 20:27:39.560837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.582 qpair failed and we were unable to recover it. 00:29:14.582 [2024-07-15 20:27:39.560921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.582 [2024-07-15 20:27:39.560929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.582 qpair failed and we were unable to recover it. 00:29:14.582 [2024-07-15 20:27:39.560991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.582 [2024-07-15 20:27:39.560999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.582 qpair failed and we were unable to recover it. 
00:29:14.582 [2024-07-15 20:27:39.561101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.561109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.561213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.561222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.561323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.561335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.561496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.561505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.561692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.561701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.561800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.561809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.561916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.561925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.562021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.562030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.562119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.562128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.562298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.562308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 
00:29:14.583 [2024-07-15 20:27:39.562399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.562409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.562586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.562595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.562685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.562695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.562797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.562805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.562912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.562921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.563009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.563017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.563174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.563183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.563341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.563351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.563442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.563451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.563614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.563623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 
00:29:14.583 [2024-07-15 20:27:39.563738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.563747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.563842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.563851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.564090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.564099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.564195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.564204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.564357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.564367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.564518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.564527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.564683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.564692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.564776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.564785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.565293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.565302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.565502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.565512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 
00:29:14.583 [2024-07-15 20:27:39.565677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.565686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.565834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.565843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.565938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.565947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.566096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.566105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.566194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.566202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.566420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.566429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.566539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.566548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.566753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.566762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.566880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.566889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 00:29:14.583 [2024-07-15 20:27:39.567002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.567011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.583 qpair failed and we were unable to recover it. 
00:29:14.583 [2024-07-15 20:27:39.567179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.583 [2024-07-15 20:27:39.567188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.567283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.567293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.567436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.567447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.567714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.567723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.567940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.567949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.568051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.568060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.568252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.568281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.568496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.568505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.568670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.568679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.568837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.568846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 
00:29:14.584 [2024-07-15 20:27:39.568947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.568956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.569058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.569067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.569246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.569260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.569370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.569379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.569546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.569555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.569649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.569658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.569844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.569853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.570112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.570121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.570222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.570231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.570455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.570465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 
00:29:14.584 [2024-07-15 20:27:39.570579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.570588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.570776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.570785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.571007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.571016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.571125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.571134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.571249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.571262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.571362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.571371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.571612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.571621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.571699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.571707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.571877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.571886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.572107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.572116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 
00:29:14.584 [2024-07-15 20:27:39.572199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.572208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.572389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.572398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.572551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.572560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.572709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.572718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.572908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.572917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.573077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.573086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.573237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.573246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.573424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.573449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.573682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.573697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.573816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.573830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 
00:29:14.584 [2024-07-15 20:27:39.574003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.584 [2024-07-15 20:27:39.574017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.584 qpair failed and we were unable to recover it. 00:29:14.584 [2024-07-15 20:27:39.574107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.585 [2024-07-15 20:27:39.574121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.585 qpair failed and we were unable to recover it. 00:29:14.585 [2024-07-15 20:27:39.574302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.585 [2024-07-15 20:27:39.574322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.585 qpair failed and we were unable to recover it. 00:29:14.585 [2024-07-15 20:27:39.574437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.585 [2024-07-15 20:27:39.574452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.585 qpair failed and we were unable to recover it. 00:29:14.585 [2024-07-15 20:27:39.574695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.585 [2024-07-15 20:27:39.574709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.585 qpair failed and we were unable to recover it. 00:29:14.585 [2024-07-15 20:27:39.574818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.585 [2024-07-15 20:27:39.574832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.585 qpair failed and we were unable to recover it. 00:29:14.585 [2024-07-15 20:27:39.575015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.585 [2024-07-15 20:27:39.575029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.585 qpair failed and we were unable to recover it. 00:29:14.585 [2024-07-15 20:27:39.575207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.585 [2024-07-15 20:27:39.575221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.585 qpair failed and we were unable to recover it. 00:29:14.585 [2024-07-15 20:27:39.575394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.585 [2024-07-15 20:27:39.575408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.585 qpair failed and we were unable to recover it. 00:29:14.585 [2024-07-15 20:27:39.575652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.585 [2024-07-15 20:27:39.575663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.585 qpair failed and we were unable to recover it. 
00:29:14.585 [2024-07-15 20:27:39.575841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.585 [2024-07-15 20:27:39.575850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.585 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pairs repeat for every reconnect attempt between 20:27:39.575 and 20:27:39.611 against addr=10.0.0.2, port=4420, for tqpair handles 0x7f3704000b90, 0x7f36fc000b90 and 0x7f370c000b90; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:14.590 [2024-07-15 20:27:39.611753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.590 [2024-07-15 20:27:39.611762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.590 qpair failed and we were unable to recover it.
00:29:14.590 [2024-07-15 20:27:39.611925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.590 [2024-07-15 20:27:39.611934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.590 qpair failed and we were unable to recover it. 00:29:14.590 [2024-07-15 20:27:39.612052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.590 [2024-07-15 20:27:39.612061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.590 qpair failed and we were unable to recover it. 00:29:14.590 [2024-07-15 20:27:39.612143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.590 [2024-07-15 20:27:39.612151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.590 qpair failed and we were unable to recover it. 00:29:14.590 [2024-07-15 20:27:39.612301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.590 [2024-07-15 20:27:39.612311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.590 qpair failed and we were unable to recover it. 00:29:14.590 [2024-07-15 20:27:39.612491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.590 [2024-07-15 20:27:39.612500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.590 qpair failed and we were unable to recover it. 00:29:14.590 [2024-07-15 20:27:39.612652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.590 [2024-07-15 20:27:39.612661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.590 qpair failed and we were unable to recover it. 00:29:14.590 [2024-07-15 20:27:39.612836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.590 [2024-07-15 20:27:39.612845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.590 qpair failed and we were unable to recover it. 00:29:14.590 [2024-07-15 20:27:39.613020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.590 [2024-07-15 20:27:39.613029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.590 qpair failed and we were unable to recover it. 00:29:14.590 [2024-07-15 20:27:39.613193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.590 [2024-07-15 20:27:39.613202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.590 qpair failed and we were unable to recover it. 00:29:14.590 [2024-07-15 20:27:39.613470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.590 [2024-07-15 20:27:39.613480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.590 qpair failed and we were unable to recover it. 
00:29:14.590 [2024-07-15 20:27:39.613631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.590 [2024-07-15 20:27:39.613640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.590 qpair failed and we were unable to recover it. 00:29:14.590 [2024-07-15 20:27:39.613750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.590 [2024-07-15 20:27:39.613759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.590 qpair failed and we were unable to recover it. 00:29:14.590 [2024-07-15 20:27:39.613974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.590 [2024-07-15 20:27:39.613983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.590 qpair failed and we were unable to recover it. 00:29:14.590 [2024-07-15 20:27:39.614165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.590 [2024-07-15 20:27:39.614174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.590 qpair failed and we were unable to recover it. 00:29:14.590 [2024-07-15 20:27:39.614283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.590 [2024-07-15 20:27:39.614292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.590 qpair failed and we were unable to recover it. 00:29:14.590 [2024-07-15 20:27:39.614443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.590 [2024-07-15 20:27:39.614453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.590 qpair failed and we were unable to recover it. 00:29:14.590 [2024-07-15 20:27:39.614722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.590 [2024-07-15 20:27:39.614731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.590 qpair failed and we were unable to recover it. 00:29:14.590 [2024-07-15 20:27:39.614844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.590 [2024-07-15 20:27:39.614853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.590 qpair failed and we were unable to recover it. 00:29:14.590 [2024-07-15 20:27:39.615020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.615029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.615124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.615132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 
00:29:14.591 [2024-07-15 20:27:39.615283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.615293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.615409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.615418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.615506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.615516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.615682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.615691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.615791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.615800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.615887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.615895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.616096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.616105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.616184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.616192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.616297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.616305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.616454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.616463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 
00:29:14.591 [2024-07-15 20:27:39.616700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.616709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.616860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.616870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.617103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.617112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.617270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.617279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.617376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.617385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.617569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.617578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.617765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.617774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.617856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.617864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.618021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.618030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.618107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.618115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 
00:29:14.591 [2024-07-15 20:27:39.618212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.618219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.618306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.618315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.618463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.618472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.618695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.618704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.618797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.618805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.618969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.618978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.619129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.619138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.619240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.619249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.619346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.619355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.619503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.619512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 
00:29:14.591 [2024-07-15 20:27:39.619617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.619626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.619773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.619782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.591 [2024-07-15 20:27:39.619934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.591 [2024-07-15 20:27:39.619943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.591 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.620039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.620047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.620141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.620150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.620300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.620310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.620412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.620421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.620661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.620671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.620753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.620761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.620856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.620865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 
00:29:14.592 [2024-07-15 20:27:39.621018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.621027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.621193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.621202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.621352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.621363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.621606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.621614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.621732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.621741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.621892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.621901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.622083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.622092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.622354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.622363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.622518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.622527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.622679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.622688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 
00:29:14.592 [2024-07-15 20:27:39.622782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.622790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.622966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.622975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.623063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.623074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.623224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.623233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.623502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.623511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.623591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.623600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.623750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.623759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.623913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.623922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.624100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.624109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.624327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.624336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 
00:29:14.592 [2024-07-15 20:27:39.624600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.624609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.624803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.624812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.624967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.624976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.625139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.625148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.625245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.625266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.625417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.625426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.625565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.625574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.625761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.625770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.625958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.625967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.626078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.626087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 
00:29:14.592 [2024-07-15 20:27:39.626239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.626248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.626402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.626411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.626505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.592 [2024-07-15 20:27:39.626514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.592 qpair failed and we were unable to recover it. 00:29:14.592 [2024-07-15 20:27:39.626595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.626603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.626768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.626777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.626955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.626963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.627072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.627080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.627243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.627251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.627354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.627363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.627463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.627472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 
00:29:14.593 [2024-07-15 20:27:39.627691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.627700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.627814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.627823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.627986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.627998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.628166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.628174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.628338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.628348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.628513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.628522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.628670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.628679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.628758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.628767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.628859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.628868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.629054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.629063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 
00:29:14.593 [2024-07-15 20:27:39.629229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.629237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.629343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.629352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.629447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.629456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.629552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.629561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.629799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.629808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.629902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.629911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.630062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.630071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.630179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.630188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.630340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.630349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.630508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.630517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 
00:29:14.593 [2024-07-15 20:27:39.630609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.630618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.630811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.630820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.630938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.630947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.631112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.631121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.631341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.631350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.631576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.631585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.631848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.631857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.631959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.631969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.632148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.632157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.632364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.632374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 
00:29:14.593 [2024-07-15 20:27:39.632611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.632619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.632780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.632789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.632943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.632952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.633203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.593 [2024-07-15 20:27:39.633212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.593 qpair failed and we were unable to recover it. 00:29:14.593 [2024-07-15 20:27:39.633432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.633441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.633521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.633529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.633692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.633701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.633939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.633948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.634097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.634106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.634340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.634349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 
00:29:14.594 [2024-07-15 20:27:39.634504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.634513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.634744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.634753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.634843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.634852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.635025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.635034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.635258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.635267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.635431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.635440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.635590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.635600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.635714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.635723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.635968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.635977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.636074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.636083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 
00:29:14.594 [2024-07-15 20:27:39.636267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.636276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.636458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.636467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.636635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.636644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.636863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.636873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.636962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.636971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.637085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.637094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.637343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.637352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.637537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.637546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.637639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.637647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.637744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.637753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 
00:29:14.594 [2024-07-15 20:27:39.637847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.637856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.638023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.638032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.638181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.638189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.638385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.638394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.638558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.638566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.638675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.638684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.638887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.638896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.639123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.639132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.639371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.639380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.639551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.639561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 
00:29:14.594 [2024-07-15 20:27:39.639783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.639791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.639896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.639905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.640000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.640009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.640173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.640182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.594 [2024-07-15 20:27:39.640403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.594 [2024-07-15 20:27:39.640412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.594 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.640645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.640654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.640771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.640780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.641000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.641008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.641165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.641173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.641273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.641282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 
00:29:14.595 [2024-07-15 20:27:39.641550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.641558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.641706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.641715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.641873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.641882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.641981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.641990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.642145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.642154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.642399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.642408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.642526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.642535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.642683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.642692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.642787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.642796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.643018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.643027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 
00:29:14.595 [2024-07-15 20:27:39.643188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.643197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.643370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.643379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.643551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.643560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.643728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.643736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.643952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.643961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.644061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.644070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.644267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.644276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.644454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.644462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.644561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.644570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.644721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.644730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 
00:29:14.595 [2024-07-15 20:27:39.644948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.644957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.645133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.645142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.645363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.645372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.645489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.645498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.645673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.645681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.645841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.645849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.645998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.646007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.646152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.646161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.646422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.646432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.595 qpair failed and we were unable to recover it. 00:29:14.595 [2024-07-15 20:27:39.646667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.595 [2024-07-15 20:27:39.646678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 
00:29:14.596 [2024-07-15 20:27:39.646824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.646833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.647079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.647088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.647195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.647204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.647375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.647384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.647470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.647478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.647758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.647767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.647977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.647986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.648206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.648214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.648368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.648393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.648567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.648575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 
00:29:14.596 [2024-07-15 20:27:39.648792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.648801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.648912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.648922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.649082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.649092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.649262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.649273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.649492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.649502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.649670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.649679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.649788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.649797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.650046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.650055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.650243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.650252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.650502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.650511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 
00:29:14.596 [2024-07-15 20:27:39.650679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.650687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.650795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.650803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.650886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.650894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.651055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.651064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.651234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.651244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.651397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.651406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.651496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.651504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.651725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.651734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.651871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.651880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.651977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.652005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 
00:29:14.596 [2024-07-15 20:27:39.652106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.652115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.652264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.652273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.652422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.652431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.652516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.652524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.652775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.652784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.652874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.652882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.652964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.652973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.653070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.653079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.653267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.653276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 00:29:14.596 [2024-07-15 20:27:39.653440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.653450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.596 qpair failed and we were unable to recover it. 
00:29:14.596 [2024-07-15 20:27:39.653602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.596 [2024-07-15 20:27:39.653611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.653729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.653738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.653825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.653833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.653988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.653997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.654156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.654165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.654434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.654444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.654606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.654614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.654764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.654773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.654881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.654890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.655117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.655126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 
00:29:14.597 [2024-07-15 20:27:39.655302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.655311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.655564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.655573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.655733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.655741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.655961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.655970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.656166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.656175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.656371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.656380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.656541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.656550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.656644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.656652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.656799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.656808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.657052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.657061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 
00:29:14.597 [2024-07-15 20:27:39.657211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.657220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.657384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.657394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.657493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.657501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.657720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.657729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.657874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.657883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.657988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.657998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.658163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.658172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.658336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.658345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.658507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.658515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.658604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.658612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 
00:29:14.597 [2024-07-15 20:27:39.658710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.658722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.658874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.658883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.658982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.658992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.659153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.659162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.659264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.659273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.659363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.659372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.659466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.659474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.659572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.659581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.659749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.659759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.659854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.659867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 
00:29:14.597 [2024-07-15 20:27:39.660092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.597 [2024-07-15 20:27:39.660101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.597 qpair failed and we were unable to recover it. 00:29:14.597 [2024-07-15 20:27:39.660216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.660241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.660436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.660445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.660556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.660566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.660834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.660844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.660939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.660948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.661130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.661140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.661337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.661347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.661518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.661528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.661636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.661646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 
00:29:14.598 [2024-07-15 20:27:39.661873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.661884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.662041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.662051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.662233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.662242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.662447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.662457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.662688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.662697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.662853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.662862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.662964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.662973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.663086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.663095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.663183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.663191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.663303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.663312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 
00:29:14.598 [2024-07-15 20:27:39.663539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.663548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.663642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.663651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.663832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.663841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.663990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.663999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.664158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.664167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.664351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.664360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.664522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.664530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.664684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.664693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.664869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.664878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.665072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.665082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 
00:29:14.598 [2024-07-15 20:27:39.665162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.665170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.665346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.665355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.665430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.665438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.665518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.665526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.665745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.665753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.666004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.666013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.666261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.666270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.666370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.666379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.666621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.666630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.666786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.666797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 
00:29:14.598 [2024-07-15 20:27:39.666880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.666888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.667071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.598 [2024-07-15 20:27:39.667081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.598 qpair failed and we were unable to recover it. 00:29:14.598 [2024-07-15 20:27:39.667308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.667319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.667530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.667539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.667624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.667632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.667712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.667721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.667892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.667902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.668092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.668102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.668194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.668203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.668367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.668377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 
00:29:14.599 [2024-07-15 20:27:39.668567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.668576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.668763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.668772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.668862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.668872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.668979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.668987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.669162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.669171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.669319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.669328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.669479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.669488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.669657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.669666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.669832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.669841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.669937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.669946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 
00:29:14.599 [2024-07-15 20:27:39.669958] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:14.599 [2024-07-15 20:27:39.670011] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:14.599 [2024-07-15 20:27:39.670031] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:14.599 [2024-07-15 20:27:39.670050] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:14.599 [2024-07-15 20:27:39.670065] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:14.599 [2024-07-15 20:27:39.670126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.599 [2024-07-15 20:27:39.670136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.599 qpair failed and we were unable to recover it.
00:29:14.599 [2024-07-15 20:27:39.670145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:29:14.599 [2024-07-15 20:27:39.670296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.599 [2024-07-15 20:27:39.670305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.599 qpair failed and we were unable to recover it.
00:29:14.599 [2024-07-15 20:27:39.670227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:29:14.599 [2024-07-15 20:27:39.670232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:29:14.599 [2024-07-15 20:27:39.670188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:29:14.599 [2024-07-15 20:27:39.670524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.599 [2024-07-15 20:27:39.670534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.599 qpair failed and we were unable to recover it.
00:29:14.599 [2024-07-15 20:27:39.670731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.599 [2024-07-15 20:27:39.670740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.599 qpair failed and we were unable to recover it.
00:29:14.599 [2024-07-15 20:27:39.670835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.599 [2024-07-15 20:27:39.670844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.599 qpair failed and we were unable to recover it.
00:29:14.599 [2024-07-15 20:27:39.671061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.599 [2024-07-15 20:27:39.671069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.599 qpair failed and we were unable to recover it.
00:29:14.599 [2024-07-15 20:27:39.671171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.599 [2024-07-15 20:27:39.671179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.599 qpair failed and we were unable to recover it.
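For context on the repeated failures above: errno = 111 on Linux is ECONNREFUSED, i.e. the initiator's connect() toward 10.0.0.2 port 4420 (the IANA-assigned NVMe/TCP port) is being refused, typically because nothing is accepting on that address at the moment, so nvme_tcp_qpair_connect_sock keeps retrying and the qpair cannot be recovered. The snippet below is a minimal stand-alone C sketch of that condition only, not SPDK's posix_sock_create implementation; the address and port are simply copied from the log above.

/* Minimal illustration (assumption: Linux host, no listener on 10.0.0.2:4420):
 * a plain connect() to a port with no listener fails with errno 111
 * (ECONNREFUSED), which is the value posix_sock_create logs above. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),              /* port taken from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);   /* address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Expected result while no target listens: errno == 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Compiled and run against a host with no listener on 4420, this should print the same "connect() failed, errno = 111" seen throughout the trace.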
00:29:14.599 [2024-07-15 20:27:39.671283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.671292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.671377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.671386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.671604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.671613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.671835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.671844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.671933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.671941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.672105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.672120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.672345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.672355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.599 qpair failed and we were unable to recover it. 00:29:14.599 [2024-07-15 20:27:39.672601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.599 [2024-07-15 20:27:39.672611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.672709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.672717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.672817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.672826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 
00:29:14.600 [2024-07-15 20:27:39.673016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.673026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.673205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.673214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.673379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.673388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.673480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.673488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.673638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.673647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.673819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.673828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.673941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.673949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.674030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.674038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.674184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.674192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.674345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.674355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 
00:29:14.600 [2024-07-15 20:27:39.674518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.674528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.674747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.674756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.674904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.674915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.675081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.675090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.675237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.675246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.675375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.675396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.675516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.675531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.675698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.675712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.675880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.675894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.676016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.676030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 
00:29:14.600 [2024-07-15 20:27:39.676152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.676166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.676323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.676335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.676417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.676425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.676579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.676590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.676748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.676757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.676926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.676935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.677105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.677115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.677273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.677283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.677445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.677455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.677661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.677670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 
00:29:14.600 [2024-07-15 20:27:39.677817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.677827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.678010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.678019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.678266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.678276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.678428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.678437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.678691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.678700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.678962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.678971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.679066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.679074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.679175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.600 [2024-07-15 20:27:39.679183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.600 qpair failed and we were unable to recover it. 00:29:14.600 [2024-07-15 20:27:39.679331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.679339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.679543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.679561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 
00:29:14.601 [2024-07-15 20:27:39.679743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.679757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.679865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.679879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.680046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.680057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.680276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.680285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.680402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.680411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.680572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.680581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.680674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.680683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.680892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.680901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.680982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.680991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.681098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.681107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 
00:29:14.601 [2024-07-15 20:27:39.681269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.681279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.681416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.681425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.681524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.681535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.681713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.681723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.681911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.681920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.682096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.682105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.682258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.682268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.682522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.682532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.682785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.682794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.682980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.682989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 
00:29:14.601 [2024-07-15 20:27:39.683178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.683188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.683358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.683367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.683553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.683563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.683722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.683731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.683897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.683906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.684006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.684015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.684168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.684177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.684262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.684272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.684449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.684459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.684607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.684616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 
00:29:14.601 [2024-07-15 20:27:39.684711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.684720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.684880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.684890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.685113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.685123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.685226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.685235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.685437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.685447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.685613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.685622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.685773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.685783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.685942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.685952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.686055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.601 [2024-07-15 20:27:39.686064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.601 qpair failed and we were unable to recover it. 00:29:14.601 [2024-07-15 20:27:39.686232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.686243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 
00:29:14.602 [2024-07-15 20:27:39.686333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.686342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 00:29:14.602 [2024-07-15 20:27:39.686438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.686447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 00:29:14.602 [2024-07-15 20:27:39.686598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.686607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 00:29:14.602 [2024-07-15 20:27:39.686771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.686780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 00:29:14.602 [2024-07-15 20:27:39.686877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.686887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 00:29:14.602 [2024-07-15 20:27:39.687000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.687009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 00:29:14.602 [2024-07-15 20:27:39.687164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.687173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 00:29:14.602 [2024-07-15 20:27:39.687274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.687284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 00:29:14.602 [2024-07-15 20:27:39.687439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.687449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 00:29:14.602 [2024-07-15 20:27:39.687665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.687674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 
00:29:14.602 [2024-07-15 20:27:39.687775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.687784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 00:29:14.602 [2024-07-15 20:27:39.687889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.687898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 00:29:14.602 [2024-07-15 20:27:39.688062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.688072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 00:29:14.602 [2024-07-15 20:27:39.688264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.688274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 00:29:14.602 [2024-07-15 20:27:39.688446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.688455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 00:29:14.602 [2024-07-15 20:27:39.688608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.688618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 00:29:14.602 [2024-07-15 20:27:39.688861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.688870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 00:29:14.602 [2024-07-15 20:27:39.689044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.689054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 00:29:14.602 [2024-07-15 20:27:39.689210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.689219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 00:29:14.602 [2024-07-15 20:27:39.689368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.602 [2024-07-15 20:27:39.689378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.602 qpair failed and we were unable to recover it. 
00:29:14.602 [2024-07-15 20:27:39.689471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.602 [2024-07-15 20:27:39.689480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.602 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously from 20:27:39.689 through 20:27:39.725 ...]
00:29:14.607 [2024-07-15 20:27:39.725577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.607 [2024-07-15 20:27:39.725586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.607 qpair failed and we were unable to recover it.
00:29:14.607 [2024-07-15 20:27:39.725749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.607 [2024-07-15 20:27:39.725758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.607 qpair failed and we were unable to recover it. 00:29:14.607 [2024-07-15 20:27:39.725906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.607 [2024-07-15 20:27:39.725915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.607 qpair failed and we were unable to recover it. 00:29:14.607 [2024-07-15 20:27:39.726085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.607 [2024-07-15 20:27:39.726094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.607 qpair failed and we were unable to recover it. 00:29:14.607 [2024-07-15 20:27:39.726185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.607 [2024-07-15 20:27:39.726194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.607 qpair failed and we were unable to recover it. 00:29:14.607 [2024-07-15 20:27:39.726358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.607 [2024-07-15 20:27:39.726366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.607 qpair failed and we were unable to recover it. 00:29:14.607 [2024-07-15 20:27:39.726454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.607 [2024-07-15 20:27:39.726463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.607 qpair failed and we were unable to recover it. 00:29:14.607 [2024-07-15 20:27:39.726612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.607 [2024-07-15 20:27:39.726621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.607 qpair failed and we were unable to recover it. 00:29:14.607 [2024-07-15 20:27:39.726804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.607 [2024-07-15 20:27:39.726813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.607 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.727061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.727069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.727232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.727242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 
00:29:14.608 [2024-07-15 20:27:39.727457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.727466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.727621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.727629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.727796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.727805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.727965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.727974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.728072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.728080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.728304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.728313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.728459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.728467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.728651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.728660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.728820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.728828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.729004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.729013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 
00:29:14.608 [2024-07-15 20:27:39.729149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.729157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.729309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.729318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.729410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.729419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.729600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.729609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.729768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.729777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.730032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.730041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.730148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.730156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.730393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.730403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.730587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.730596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.730812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.730822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 
00:29:14.608 [2024-07-15 20:27:39.730915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.730925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.731043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.731053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.731156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.731165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.731355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.731364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.731608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.731617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.731711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.731720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.731913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.731923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.732071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.732080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.732313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.732323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.732488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.732497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 
00:29:14.608 [2024-07-15 20:27:39.732655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.732664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.732775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.732783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.732890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.608 [2024-07-15 20:27:39.732898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.608 qpair failed and we were unable to recover it. 00:29:14.608 [2024-07-15 20:27:39.733131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.733140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.733358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.733367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.733470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.733479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.733631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.733640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.733874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.733884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.734046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.734056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.734273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.734286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 
00:29:14.609 [2024-07-15 20:27:39.734477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.734486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.734638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.734647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.734891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.734900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.735083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.735092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.735267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.735277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.735426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.735436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.735633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.735642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.735790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.735800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.735912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.735922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.736014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.736022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 
00:29:14.609 [2024-07-15 20:27:39.736293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.736303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.736417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.736427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.736574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.736583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.736746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.736755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.736839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.736848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.736984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.736994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.737266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.737276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.737370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.737380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.737545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.737555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.737774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.737783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 
00:29:14.609 [2024-07-15 20:27:39.738017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.738027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.738273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.738283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.738451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.738460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.738555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.738564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.738738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.738747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.738935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.738944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.739110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.739120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.739205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.739214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.739463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.739472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.739623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.739633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 
00:29:14.609 [2024-07-15 20:27:39.739722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.739731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.739892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.739901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.740090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.740099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.609 [2024-07-15 20:27:39.740212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.609 [2024-07-15 20:27:39.740221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.609 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.740427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.740437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.740533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.740542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.740655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.740664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.740810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.740819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.740974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.740983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.741066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.741077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 
00:29:14.610 [2024-07-15 20:27:39.741259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.741269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.741459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.741468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.741552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.741561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.741726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.741735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.741970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.741980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.742094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.742103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.742273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.742283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.742388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.742397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.742644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.742654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.742836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.742845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 
00:29:14.610 [2024-07-15 20:27:39.743080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.743089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.743261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.743270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.743434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.743443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.743527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.743536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.743777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.743786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.743967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.743976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.744057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.744067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.744238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.744247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.744341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.744351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.744517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.744526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 
00:29:14.610 [2024-07-15 20:27:39.744681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.744690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.744786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.744794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.744961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.744970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.745132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.745141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.745310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.745320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.745400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.745409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.745585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.745595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.745686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.745694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.745935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.745946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.746122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.746132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 
00:29:14.610 [2024-07-15 20:27:39.746238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.746247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.746404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.746414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.746516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.746525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.746746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.746755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.610 [2024-07-15 20:27:39.746865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.610 [2024-07-15 20:27:39.746874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.610 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.747033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.747042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.747286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.747296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.747465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.747475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.747722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.747732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.747891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.747903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 
00:29:14.611 [2024-07-15 20:27:39.748134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.748143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.748309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.748318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.748479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.748488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.748643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.748652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.748758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.748767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.748942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.748951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.749104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.749114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.749386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.749396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.749513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.749522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.749684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.749694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 
00:29:14.611 [2024-07-15 20:27:39.749858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.749867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.750036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.750045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.750135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.750145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.750252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.750265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.750509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.750518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.750684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.750693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.750773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.750781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.750935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.750944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.751094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.751102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.751299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.751309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 
00:29:14.611 [2024-07-15 20:27:39.751411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.751420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.751582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.751591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.751754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.751763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.751871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.751880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.752052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.752061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.752133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.752142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.752262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.752271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.752458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.752467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.752559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.752568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.752765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.752774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 
00:29:14.611 [2024-07-15 20:27:39.752853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.752861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.753106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.753115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.753279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.753289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.753436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.753445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.753605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.753613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.611 [2024-07-15 20:27:39.753773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.611 [2024-07-15 20:27:39.753782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.611 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.753950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.753959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.754142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.754150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.754298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.754307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.754542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.754553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 
00:29:14.612 [2024-07-15 20:27:39.754724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.754733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.754889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.754898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.754974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.754983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.755075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.755084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.755231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.755240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.755351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.755361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.755455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.755464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.755624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.755632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.755742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.755750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.755833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.755842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 
00:29:14.612 [2024-07-15 20:27:39.756027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.756036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.756282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.756291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.756457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.756466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.756638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.756647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.756739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.756748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.756948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.756957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.757108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.757117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.757363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.757373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.757595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.757604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.757763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.757772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 
00:29:14.612 [2024-07-15 20:27:39.757953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.757961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.758045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.758054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.758287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.758296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.758515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.758524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.758636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.758645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.758795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.758803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.758956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.758964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.759047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.759055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.759154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.759163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 00:29:14.612 [2024-07-15 20:27:39.759467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.612 [2024-07-15 20:27:39.759476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.612 qpair failed and we were unable to recover it. 
00:29:14.612 [2024-07-15 20:27:39.759645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.759654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.759816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.759825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.759990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.759999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.760150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.760159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.760250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.760264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.760449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.760457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.760558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.760567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.760815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.760824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.760903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.760911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.761156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.761166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 
00:29:14.613 [2024-07-15 20:27:39.761414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.761422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.761615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.761624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.761785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.761794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.762029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.762037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.762187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.762196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.762308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.762317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.762547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.762556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.762641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.762650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.762897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.762906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.763082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.763090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 
00:29:14.613 [2024-07-15 20:27:39.763170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.763179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.763344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.763353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.763519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.763528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.763650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.763659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.763828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.763836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.764084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.764093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.764262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.764271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.764431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.764440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.764613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.764622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.764706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.764715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 
00:29:14.613 [2024-07-15 20:27:39.764883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.764892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.765040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.765049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.765202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.765210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.765386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.765396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.765498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.765507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.765606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.765615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.765727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.765737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.765835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.765844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.765995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.766004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 00:29:14.613 [2024-07-15 20:27:39.766199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.766208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.613 qpair failed and we were unable to recover it. 
00:29:14.613 [2024-07-15 20:27:39.766301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.613 [2024-07-15 20:27:39.766310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.766530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.766539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.766784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.766793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.767011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.767020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.767111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.767120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.767286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.767295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.767516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.767524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.767746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.767754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.767843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.767851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.768022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.768033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 
00:29:14.614 [2024-07-15 20:27:39.768183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.768192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.768365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.768375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.768479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.768488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.768649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.768657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.768756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.768765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.768939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.768947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.769113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.769121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.769271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.769280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.769380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.769388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.769477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.769486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 
00:29:14.614 [2024-07-15 20:27:39.769636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.769645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.769738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.769746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.769946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.769955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.770149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.770158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.770274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.770283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.770450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.770459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.770608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.770617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.770800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.770808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.770921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.770929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.771192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.771200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 
00:29:14.614 [2024-07-15 20:27:39.771350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.771359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.771546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.771555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.771827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.771836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.772014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.772022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.772219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.772228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.772390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.772399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.772603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.772612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.772800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.772809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.772957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.772965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.773110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.773119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 
00:29:14.614 [2024-07-15 20:27:39.773269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.614 [2024-07-15 20:27:39.773278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.614 qpair failed and we were unable to recover it. 00:29:14.614 [2024-07-15 20:27:39.773468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.615 [2024-07-15 20:27:39.773477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.615 qpair failed and we were unable to recover it. 00:29:14.615 [2024-07-15 20:27:39.773624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.615 [2024-07-15 20:27:39.773633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.615 qpair failed and we were unable to recover it. 00:29:14.615 [2024-07-15 20:27:39.773875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.615 [2024-07-15 20:27:39.773883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.615 qpair failed and we were unable to recover it. 00:29:14.615 [2024-07-15 20:27:39.774103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.615 [2024-07-15 20:27:39.774112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.615 qpair failed and we were unable to recover it. 00:29:14.615 [2024-07-15 20:27:39.774381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.615 [2024-07-15 20:27:39.774390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.615 qpair failed and we were unable to recover it. 00:29:14.615 [2024-07-15 20:27:39.774482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.615 [2024-07-15 20:27:39.774490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.615 qpair failed and we were unable to recover it. 00:29:14.615 [2024-07-15 20:27:39.774586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.615 [2024-07-15 20:27:39.774595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.615 qpair failed and we were unable to recover it. 00:29:14.615 [2024-07-15 20:27:39.774787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.615 [2024-07-15 20:27:39.774796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.615 qpair failed and we were unable to recover it. 00:29:14.615 [2024-07-15 20:27:39.774891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.615 [2024-07-15 20:27:39.774899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.615 qpair failed and we were unable to recover it. 
00:29:14.615 [2024-07-15 20:27:39.775063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.615 [2024-07-15 20:27:39.775072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.615 qpair failed and we were unable to recover it. 00:29:14.615 [2024-07-15 20:27:39.775261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.615 [2024-07-15 20:27:39.775270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.615 qpair failed and we were unable to recover it. 00:29:14.615 [2024-07-15 20:27:39.775424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.615 [2024-07-15 20:27:39.775433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.615 qpair failed and we were unable to recover it. 00:29:14.615 [2024-07-15 20:27:39.775594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.615 [2024-07-15 20:27:39.775602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.615 qpair failed and we were unable to recover it. 00:29:14.615 [2024-07-15 20:27:39.775855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.615 [2024-07-15 20:27:39.775863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.615 qpair failed and we were unable to recover it. 00:29:14.615 [2024-07-15 20:27:39.776041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.615 [2024-07-15 20:27:39.776049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.615 qpair failed and we were unable to recover it. 00:29:14.615 [2024-07-15 20:27:39.776280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.615 [2024-07-15 20:27:39.776289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.615 qpair failed and we were unable to recover it. 00:29:14.615 [2024-07-15 20:27:39.776527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.615 [2024-07-15 20:27:39.776535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.615 qpair failed and we were unable to recover it. 00:29:14.615 [2024-07-15 20:27:39.776853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.615 [2024-07-15 20:27:39.776861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.615 qpair failed and we were unable to recover it. 00:29:14.615 [2024-07-15 20:27:39.776998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.615 [2024-07-15 20:27:39.777006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.615 qpair failed and we were unable to recover it. 
00:29:14.615 [2024-07-15 20:27:39.777223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.615 [2024-07-15 20:27:39.777231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.615 qpair failed and we were unable to recover it.
00:29:14.615 [... the same three-line failure repeats back-to-back with only the timestamps advancing (20:27:39.777392 through 20:27:39.812483): connect() to 10.0.0.2 port 4420 keeps returning errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x7f3704000b90, and each qpair fails and cannot be recovered ...]
00:29:14.620 [2024-07-15 20:27:39.812483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.620 [2024-07-15 20:27:39.812492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.620 qpair failed and we were unable to recover it.
00:29:14.620 [2024-07-15 20:27:39.812682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.812690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.620 [2024-07-15 20:27:39.812773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.812782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.620 [2024-07-15 20:27:39.812945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.812953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.620 [2024-07-15 20:27:39.813144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.813153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.620 [2024-07-15 20:27:39.813321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.813330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.620 [2024-07-15 20:27:39.813603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.813612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.620 [2024-07-15 20:27:39.813775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.813786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.620 [2024-07-15 20:27:39.813888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.813897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.620 [2024-07-15 20:27:39.814062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.814071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.620 [2024-07-15 20:27:39.814221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.814230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 
00:29:14.620 [2024-07-15 20:27:39.814382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.814391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.620 [2024-07-15 20:27:39.814483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.814492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.620 [2024-07-15 20:27:39.814708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.814717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.620 [2024-07-15 20:27:39.814825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.814834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.620 [2024-07-15 20:27:39.815082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.815091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.620 [2024-07-15 20:27:39.815335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.815344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.620 [2024-07-15 20:27:39.815513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.815522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.620 [2024-07-15 20:27:39.815765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.815773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.620 [2024-07-15 20:27:39.815991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.816000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.620 [2024-07-15 20:27:39.816080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.816089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 
00:29:14.620 [2024-07-15 20:27:39.816205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.816214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.620 [2024-07-15 20:27:39.816317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.816326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.620 [2024-07-15 20:27:39.816543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.620 [2024-07-15 20:27:39.816552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.620 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.816715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.816724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.816888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.816897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.817003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.817011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.817266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.817275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.817428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.817437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.817589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.817597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.817683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.817692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 
00:29:14.621 [2024-07-15 20:27:39.817851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.817860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.818117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.818125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.818240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.818249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.818426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.818435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.818602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.818610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.818707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.818715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.818894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.818902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.818987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.818995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.819160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.819169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.819330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.819339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 
00:29:14.621 [2024-07-15 20:27:39.819435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.819443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.819611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.819620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.819790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.819798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.819971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.819980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.820129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.820138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.820288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.820297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.820478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.820490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.820645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.820655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.820863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.820872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.821117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.821125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 
00:29:14.621 [2024-07-15 20:27:39.821239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.821247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.821421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.821430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.821596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.821605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.821763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.821771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.821976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.821985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.822173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.822182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.822262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.822270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.822386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.822395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.822609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.822618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.822834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.822843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 
00:29:14.621 [2024-07-15 20:27:39.823076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.823085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.823261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.823270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.823440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.823449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.823696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.621 [2024-07-15 20:27:39.823705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.621 qpair failed and we were unable to recover it. 00:29:14.621 [2024-07-15 20:27:39.823862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.823872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.824053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.824062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.824236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.824245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.824402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.824411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.824630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.824639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.824747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.824756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 
00:29:14.622 [2024-07-15 20:27:39.824837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.824845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.825012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.825021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.825177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.825185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.825406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.825415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.825609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.825618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.825790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.825799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.825916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.825924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.826092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.826101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.826212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.826220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.826329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.826338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 
00:29:14.622 [2024-07-15 20:27:39.826419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.826428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.826602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.826610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.826695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.826703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.826783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.826792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.827011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.827019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.827119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.827128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.827242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.827252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.827476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.827485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.827646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.827655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.827870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.827879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 
00:29:14.622 [2024-07-15 20:27:39.827966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.827974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.828125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.828134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.828306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.828315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.828478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.828486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.828777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.828785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.828933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.828942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.829103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.829111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.829302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.829312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.829406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.829414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.829564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.829573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 
00:29:14.622 [2024-07-15 20:27:39.829673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.829683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.829855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.829864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.829976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.829985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.830197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.830206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.622 [2024-07-15 20:27:39.830379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.622 [2024-07-15 20:27:39.830388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.622 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.830577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.830585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.830757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.830766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.830848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.830857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.831078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.831086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.831181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.831190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 
00:29:14.623 [2024-07-15 20:27:39.831359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.831369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.831540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.831549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.831658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.831666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.831853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.831883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.832005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.832029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.832192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.832207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.832325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.832341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.832498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.832512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.832603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.832617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.832721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.832734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 
00:29:14.623 [2024-07-15 20:27:39.832965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.832979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.833159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.833173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.833452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.833463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.833613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.833622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.833851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.833859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.834021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.834030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.834194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.834204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.834364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.834372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.834466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.834474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 00:29:14.623 [2024-07-15 20:27:39.834698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.623 [2024-07-15 20:27:39.834707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.623 qpair failed and we were unable to recover it. 
00:29:14.623 [2024-07-15 20:27:39.834882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.623 [2024-07-15 20:27:39.834891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.623 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every connection attempt timestamped between 2024-07-15 20:27:39.834985 and 20:27:39.869763 ...]
00:29:14.629 [2024-07-15 20:27:39.869935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.629 [2024-07-15 20:27:39.869944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.629 qpair failed and we were unable to recover it.
00:29:14.629 [2024-07-15 20:27:39.870100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.870109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.870186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.870195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.870367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.870376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.870619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.870628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.870795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.870804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.870978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.870986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.871206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.871215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.871341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.871351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.871553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.871562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.871807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.871816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 
00:29:14.629 [2024-07-15 20:27:39.872063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.872072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.872167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.872176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.872393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.872402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.872564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.872574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.872761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.872770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.872858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.872867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.873099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.873107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.873328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.873337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.873580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.873589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.873716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.873724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 
00:29:14.629 [2024-07-15 20:27:39.873952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.873961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.874071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.874080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.874263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.874272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.874382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.874391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.874485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.874493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.874611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.874619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.874713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.874722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.874904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.874913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.875136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.629 [2024-07-15 20:27:39.875145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.629 qpair failed and we were unable to recover it. 00:29:14.629 [2024-07-15 20:27:39.875259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.875268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 
00:29:14.630 [2024-07-15 20:27:39.875356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.875364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.875469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.875478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.875651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.875659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.875746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.875754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.875975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.875984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.876199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.876207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.876307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.876316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.876507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.876516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.876676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.876685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.876780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.876788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 
00:29:14.630 [2024-07-15 20:27:39.876955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.876964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.877058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.877067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.877160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.877169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.877322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.877331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.877417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.877425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.877572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.877580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.877822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.877831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.877923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.877932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.878016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.878025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.878177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.878186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 
00:29:14.630 [2024-07-15 20:27:39.878350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.878359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.878507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.878516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.878609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.878617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.878776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.878786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.878899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.878907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.879081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.879090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.879173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.879182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.879334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.879343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.879440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.879449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.879610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.879619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 
00:29:14.630 [2024-07-15 20:27:39.879874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.879882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.879979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.879988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.880154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.880163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.880331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.880341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.880494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.880502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.880765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.880774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.880872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.880881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.881135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.881144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.881305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.881314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 00:29:14.630 [2024-07-15 20:27:39.881403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.630 [2024-07-15 20:27:39.881412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.630 qpair failed and we were unable to recover it. 
00:29:14.631 [2024-07-15 20:27:39.881630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.881638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.881828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.881836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.882088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.882096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.882344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.882353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.882445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.882454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.882674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.882682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.882937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.882945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.883219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.883227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.883392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.883401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.883477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.883485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 
00:29:14.631 [2024-07-15 20:27:39.883712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.883721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.883942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.883951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.884201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.884210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.884378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.884387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.884549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.884558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.884729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.884737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.884823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.884832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.885014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.885023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.885242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.885250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.885334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.885344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 
00:29:14.631 [2024-07-15 20:27:39.885566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.885575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.885816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.885825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.886002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.886010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.886174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.886185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.886427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.886436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.886623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.886631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.886785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.886794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.886979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.886988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.887231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.887239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.887418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.887428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 
00:29:14.631 [2024-07-15 20:27:39.887519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.887527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.887693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.887702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.887863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.887872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.888032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.888040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.888223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.888231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.888420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.888430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.888540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.888549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.888771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.888780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.888929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.888937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 00:29:14.631 [2024-07-15 20:27:39.889098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.631 [2024-07-15 20:27:39.889107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.631 qpair failed and we were unable to recover it. 
00:29:14.631 [2024-07-15 20:27:39.889200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.632 [2024-07-15 20:27:39.889209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.632 qpair failed and we were unable to recover it. 00:29:14.632 [2024-07-15 20:27:39.889288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.632 [2024-07-15 20:27:39.889297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.632 qpair failed and we were unable to recover it. 00:29:14.632 [2024-07-15 20:27:39.889458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.632 [2024-07-15 20:27:39.889467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.632 qpair failed and we were unable to recover it. 00:29:14.632 [2024-07-15 20:27:39.889552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.632 [2024-07-15 20:27:39.889560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.632 qpair failed and we were unable to recover it. 00:29:14.632 [2024-07-15 20:27:39.889809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.632 [2024-07-15 20:27:39.889818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.632 qpair failed and we were unable to recover it. 00:29:14.632 [2024-07-15 20:27:39.889980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.632 [2024-07-15 20:27:39.889988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.632 qpair failed and we were unable to recover it. 00:29:14.632 [2024-07-15 20:27:39.890181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.632 [2024-07-15 20:27:39.890189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.632 qpair failed and we were unable to recover it. 00:29:14.632 [2024-07-15 20:27:39.890274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.632 [2024-07-15 20:27:39.890283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.632 qpair failed and we were unable to recover it. 00:29:14.632 [2024-07-15 20:27:39.890470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.632 [2024-07-15 20:27:39.890479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.632 qpair failed and we were unable to recover it. 00:29:14.632 [2024-07-15 20:27:39.890722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.632 [2024-07-15 20:27:39.890731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.632 qpair failed and we were unable to recover it. 
00:29:14.632 [2024-07-15 20:27:39.890903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.632 [2024-07-15 20:27:39.890912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.632 qpair failed and we were unable to recover it. 00:29:14.632 [2024-07-15 20:27:39.891100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.632 [2024-07-15 20:27:39.891109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.632 qpair failed and we were unable to recover it. 00:29:14.632 [2024-07-15 20:27:39.891220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.632 [2024-07-15 20:27:39.891228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.632 qpair failed and we were unable to recover it. 00:29:14.632 [2024-07-15 20:27:39.891310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.632 [2024-07-15 20:27:39.891319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.632 qpair failed and we were unable to recover it. 00:29:14.632 [2024-07-15 20:27:39.891511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.632 [2024-07-15 20:27:39.891520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.632 qpair failed and we were unable to recover it. 00:29:14.632 [2024-07-15 20:27:39.891613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.632 [2024-07-15 20:27:39.891622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.632 qpair failed and we were unable to recover it. 00:29:14.632 [2024-07-15 20:27:39.891858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.632 [2024-07-15 20:27:39.891867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.632 qpair failed and we were unable to recover it. 00:29:14.632 [2024-07-15 20:27:39.891966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.632 [2024-07-15 20:27:39.891975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.632 qpair failed and we were unable to recover it. 00:29:14.632 [2024-07-15 20:27:39.892215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.632 [2024-07-15 20:27:39.892223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.632 qpair failed and we were unable to recover it. 00:29:14.632 [2024-07-15 20:27:39.892396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.632 [2024-07-15 20:27:39.892406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.632 qpair failed and we were unable to recover it. 
00:29:14.632 [2024-07-15 20:27:39.892637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.632 [2024-07-15 20:27:39.892645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.632 qpair failed and we were unable to recover it.
[The same three-line failure (connect() failed with errno = 111 in posix_sock_create, the resulting sock connection error for tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 in nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it.") repeats verbatim for every subsequent connection attempt through 2024-07-15 20:27:39.926710, differing only in timestamps.]
00:29:14.932 [2024-07-15 20:27:39.926874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.926883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.927127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.927135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.927304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.927314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.927506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.927515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.927688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.927696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.927795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.927804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.927953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.927961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.928066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.928075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.928223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.928232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.928327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.928336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 
00:29:14.932 [2024-07-15 20:27:39.928514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.928523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.928685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.928694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.928843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.928851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.929001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.929010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.929156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.929165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.929339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.929349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.929558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.929566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.929728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.929737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.929926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.929935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.930181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.930190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 
00:29:14.932 [2024-07-15 20:27:39.930284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.930294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.930391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.930400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.930520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.930529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.930677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.930686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.930854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.930862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.931058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.931067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.931169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.931178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.931274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.931283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.932 [2024-07-15 20:27:39.931451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.932 [2024-07-15 20:27:39.931459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.932 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.931777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.931785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 
00:29:14.933 [2024-07-15 20:27:39.931947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.931956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.932124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.932132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.932290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.932299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.932472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.932484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.932573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.932581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.932728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.932736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.932906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.932915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.933077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.933086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.933251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.933264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.933510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.933518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 
00:29:14.933 [2024-07-15 20:27:39.933670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.933679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.933832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.933840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.934079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.934088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.934190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.934199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.934352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.934361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.934441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.934450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.934553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.934562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.934647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.934656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.934749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.934758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.934986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.934995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 
00:29:14.933 [2024-07-15 20:27:39.935159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.935168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.935406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.935415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.935564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.935572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.935681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.935689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.935784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.935793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.935882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.935890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.936132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.936141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.936289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.936298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.936459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.936468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.936619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.936628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 
00:29:14.933 [2024-07-15 20:27:39.936800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.936808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.933 [2024-07-15 20:27:39.936976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.933 [2024-07-15 20:27:39.936985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.933 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.937246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.937259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.937377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.937386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.937479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.937488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.937672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.937681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.937843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.937851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.938121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.938130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.938246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.938259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.938457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.938465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 
00:29:14.934 [2024-07-15 20:27:39.938616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.938625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.938868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.938877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.939040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.939049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.939149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.939160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.939261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.939270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.939440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.939449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.939613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.939622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.939796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.939805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.939970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.939979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.940084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.940093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 
00:29:14.934 [2024-07-15 20:27:39.940262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.940272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.940416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.940425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.940485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.940494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.940667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.940675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.940913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.940922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.941001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.941010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.941089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.941098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.941206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.941215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.941386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.941395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.941547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.941556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 
00:29:14.934 [2024-07-15 20:27:39.941665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.941674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.941834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.941843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.942005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.942014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.942091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.942100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.942251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.942264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.942426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.942434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.942588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.942597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.942678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.942687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.942854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.942862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.943025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.943034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 
00:29:14.934 [2024-07-15 20:27:39.943257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.943266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.943453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.943462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.934 qpair failed and we were unable to recover it. 00:29:14.934 [2024-07-15 20:27:39.943617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.934 [2024-07-15 20:27:39.943625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.943796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.943805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.943888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.943897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.943995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.944004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.944149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.944158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.944327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.944336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.944512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.944521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.944682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.944691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 
00:29:14.935 [2024-07-15 20:27:39.944855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.944864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.944973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.944982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.945197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.945206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.945352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.945361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.945511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.945520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.945668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.945677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.945850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.945858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.945973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.945982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.946217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.946226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.946468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.946477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 
00:29:14.935 [2024-07-15 20:27:39.946635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.946644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.946810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.946818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.946987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.946996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.947182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.947190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.947466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.947475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.947722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.947731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.947977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.947985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.948080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.948088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.948258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.948267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 00:29:14.935 [2024-07-15 20:27:39.948421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.935 [2024-07-15 20:27:39.948430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.935 qpair failed and we were unable to recover it. 
00:29:14.935 [2024-07-15 20:27:39.948603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.935 [2024-07-15 20:27:39.948612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.935 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats back to back from 20:27:39.948 through 20:27:39.984 (elapsed 00:29:14.935-00:29:14.940): every connect() attempt to 10.0.0.2 port 4420 for tqpair=0x7f3704000b90 returns errno = 111, and each time the qpair fails and cannot be recovered ...]
00:29:14.940 [2024-07-15 20:27:39.984464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.940 [2024-07-15 20:27:39.984475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.940 qpair failed and we were unable to recover it. 00:29:14.940 [2024-07-15 20:27:39.984649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.940 [2024-07-15 20:27:39.984658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.940 qpair failed and we were unable to recover it. 00:29:14.940 [2024-07-15 20:27:39.984815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.940 [2024-07-15 20:27:39.984824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.984986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.984994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.985144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.985153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.985236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.985245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.985496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.985505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.985677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.985686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.985783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.985791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.985942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.985951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 
00:29:14.941 [2024-07-15 20:27:39.986056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.986065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.986290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.986299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.986501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.986510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.986728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.986737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.986820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.986830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.986987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.986996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.987147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.987156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.987374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.987383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.987551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.987559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.987722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.987730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 
00:29:14.941 [2024-07-15 20:27:39.987822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.987831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.987911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.987920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.988015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.988024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.988210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.988219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.988297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.988306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.988396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.988405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.988551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.988559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.988648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.988657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.988761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.988770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.988851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.988860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 
00:29:14.941 [2024-07-15 20:27:39.988972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.988980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.989138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.989147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.989225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.989234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.989314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.989324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.989488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.989496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.989649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.989658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.989747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.989755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.989849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.941 [2024-07-15 20:27:39.989858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.941 qpair failed and we were unable to recover it. 00:29:14.941 [2024-07-15 20:27:39.989963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.989971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.990108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.990117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 
00:29:14.942 [2024-07-15 20:27:39.990295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.990306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.990552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.990561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.990806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.990814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.991003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.991011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.991263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.991271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.991514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.991523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.991724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.991732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.991928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.991936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.992092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.992101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.992273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.992282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 
00:29:14.942 [2024-07-15 20:27:39.992500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.992509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.992619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.992628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.992888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.992897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.993050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.993058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.993153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.993162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.993261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.993270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.993381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.993390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.993624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.993633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.993820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.993829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.993991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.994000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 
00:29:14.942 [2024-07-15 20:27:39.994101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.994110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.994189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.994198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.994422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.994431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.994673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.994682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.994796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.994805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.994966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.994974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.995149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.995158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.995258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.995267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.995422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.995432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.995675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.995683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 
00:29:14.942 [2024-07-15 20:27:39.995871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.995879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.995998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.996006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.996169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.996178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.996360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.996370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.996453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.996461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.996680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.996688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.996867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.996876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.996977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.942 [2024-07-15 20:27:39.996985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.942 qpair failed and we were unable to recover it. 00:29:14.942 [2024-07-15 20:27:39.997175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:39.997184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:39.997272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:39.997281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 
00:29:14.943 [2024-07-15 20:27:39.997375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:39.997385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:39.997521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:39.997529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:39.997676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:39.997685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:39.997777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:39.997786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:39.997875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:39.997883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:39.998125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:39.998134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:39.998303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:39.998312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:39.998408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:39.998417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:39.998567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:39.998575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:39.998723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:39.998732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 
00:29:14.943 [2024-07-15 20:27:39.998837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:39.998845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:39.999057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:39.999066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:39.999217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:39.999226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:39.999396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:39.999405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:39.999489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:39.999498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:39.999674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:39.999683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:39.999799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:39.999807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:39.999928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:39.999936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:40.000036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.000045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:40.000127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.000136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 
00:29:14.943 [2024-07-15 20:27:40.000300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.000309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:40.000421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.000430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:40.000579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.000588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:40.000756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.000766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:40.000850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.000859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:40.001025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.001034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:40.001216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.001225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:40.001502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.001512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:40.001610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.001618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:40.001785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.001795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 
00:29:14.943 [2024-07-15 20:27:40.001972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.001981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:40.002064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.002073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:40.002246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.002265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:40.002378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.002387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:40.002497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.002506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:40.002681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.002690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:40.002788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.002796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:40.002898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.002907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:40.002985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.943 [2024-07-15 20:27:40.002993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.943 qpair failed and we were unable to recover it. 00:29:14.943 [2024-07-15 20:27:40.003128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.944 [2024-07-15 20:27:40.003137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.944 qpair failed and we were unable to recover it. 
00:29:14.944 [2024-07-15 20:27:40.003329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.944 [2024-07-15 20:27:40.003341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.944 qpair failed and we were unable to recover it. 00:29:14.944 [2024-07-15 20:27:40.003515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.944 [2024-07-15 20:27:40.003524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.944 qpair failed and we were unable to recover it. 00:29:14.944 [2024-07-15 20:27:40.003635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.944 [2024-07-15 20:27:40.003644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.944 qpair failed and we were unable to recover it. 00:29:14.944 [2024-07-15 20:27:40.003886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.944 [2024-07-15 20:27:40.003895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.944 qpair failed and we were unable to recover it. 00:29:14.944 [2024-07-15 20:27:40.004115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.944 [2024-07-15 20:27:40.004124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.944 qpair failed and we were unable to recover it. 00:29:14.944 [2024-07-15 20:27:40.004242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.944 [2024-07-15 20:27:40.004251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.944 qpair failed and we were unable to recover it. 00:29:14.944 [2024-07-15 20:27:40.004350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.944 [2024-07-15 20:27:40.004359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.944 qpair failed and we were unable to recover it. 00:29:14.944 [2024-07-15 20:27:40.004577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.944 [2024-07-15 20:27:40.004586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.944 qpair failed and we were unable to recover it. 00:29:14.944 [2024-07-15 20:27:40.004738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.944 [2024-07-15 20:27:40.004747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.944 qpair failed and we were unable to recover it. 00:29:14.944 [2024-07-15 20:27:40.004898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.944 [2024-07-15 20:27:40.004907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.944 qpair failed and we were unable to recover it. 
00:29:14.944 [2024-07-15 20:27:40.005016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.944 [2024-07-15 20:27:40.005025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.944 qpair failed and we were unable to recover it.
00:29:14.944 [... the same error triplet (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 20:27:40.005 to 20:27:40.038; duplicate records elided ...]
00:29:14.949 [2024-07-15 20:27:40.038317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.949 [2024-07-15 20:27:40.038326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.949 qpair failed and we were unable to recover it.
00:29:14.949 [2024-07-15 20:27:40.038426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.949 [2024-07-15 20:27:40.038435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.949 qpair failed and we were unable to recover it. 00:29:14.949 [2024-07-15 20:27:40.038594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.949 [2024-07-15 20:27:40.038602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.949 qpair failed and we were unable to recover it. 00:29:14.949 [2024-07-15 20:27:40.038694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.949 [2024-07-15 20:27:40.038703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.949 qpair failed and we were unable to recover it. 00:29:14.949 [2024-07-15 20:27:40.038880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.949 [2024-07-15 20:27:40.038888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.949 qpair failed and we were unable to recover it. 00:29:14.949 [2024-07-15 20:27:40.038982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.949 [2024-07-15 20:27:40.038991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.949 qpair failed and we were unable to recover it. 00:29:14.949 [2024-07-15 20:27:40.039158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.949 [2024-07-15 20:27:40.039167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.949 qpair failed and we were unable to recover it. 00:29:14.949 [2024-07-15 20:27:40.039388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.949 [2024-07-15 20:27:40.039397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.949 qpair failed and we were unable to recover it. 00:29:14.949 [2024-07-15 20:27:40.039564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.949 [2024-07-15 20:27:40.039575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.949 qpair failed and we were unable to recover it. 00:29:14.949 [2024-07-15 20:27:40.039796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.949 [2024-07-15 20:27:40.039805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.949 qpair failed and we were unable to recover it. 00:29:14.949 [2024-07-15 20:27:40.039913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.949 [2024-07-15 20:27:40.039922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.949 qpair failed and we were unable to recover it. 
00:29:14.949 [2024-07-15 20:27:40.040153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.949 [2024-07-15 20:27:40.040161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.949 qpair failed and we were unable to recover it. 00:29:14.949 [2024-07-15 20:27:40.040325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.949 [2024-07-15 20:27:40.040334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.949 qpair failed and we were unable to recover it. 00:29:14.949 [2024-07-15 20:27:40.040416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.949 [2024-07-15 20:27:40.040425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.949 qpair failed and we were unable to recover it. 00:29:14.949 [2024-07-15 20:27:40.040578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.949 [2024-07-15 20:27:40.040587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.949 qpair failed and we were unable to recover it. 00:29:14.949 [2024-07-15 20:27:40.040753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.949 [2024-07-15 20:27:40.040762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.949 qpair failed and we were unable to recover it. 00:29:14.949 [2024-07-15 20:27:40.040851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.949 [2024-07-15 20:27:40.040860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.949 qpair failed and we were unable to recover it. 00:29:14.949 [2024-07-15 20:27:40.041011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.949 [2024-07-15 20:27:40.041020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.949 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.041205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.041214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.041303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.041312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.041479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.041487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 
00:29:14.950 [2024-07-15 20:27:40.041731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.041740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.041914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.041923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.042025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.042034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.042181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.042189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.042357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.042366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.042533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.042542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.042624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.042632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.042734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.042742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.042920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.042929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.043094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.043103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 
00:29:14.950 [2024-07-15 20:27:40.043264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.043273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.043364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.043373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.043619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.043628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.043797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.043806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.043996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.044005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.044172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.044180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.044369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.044378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.044486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.044495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.044645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.044654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.044920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.044929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 
00:29:14.950 [2024-07-15 20:27:40.045022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.045031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.045204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.045212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.045429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.045438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.045606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.045614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.045761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.045769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.045931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.045940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.046161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.046170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.046264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.046275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.046376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.046386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.950 qpair failed and we were unable to recover it. 00:29:14.950 [2024-07-15 20:27:40.046537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.950 [2024-07-15 20:27:40.046545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 
00:29:14.951 [2024-07-15 20:27:40.046634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.046643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.046738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.046746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.046910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.046919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.047164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.047173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.047356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.047365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.047476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.047485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.047578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.047587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.047693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.047702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.047955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.047963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.048180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.048188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 
00:29:14.951 [2024-07-15 20:27:40.048267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.048277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.048439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.048448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.048601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.048610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.048723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.048732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.049005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.049013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.049097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.049106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.049277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.049286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.049384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.049393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.049543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.049552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.049657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.049665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 
00:29:14.951 [2024-07-15 20:27:40.049884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.049892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.050045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.050054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.050200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.050209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.050429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.050438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.050607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.050615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.050807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.050815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.050980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.050989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.051102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.051111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.051209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.051218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.051392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.051402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 
00:29:14.951 [2024-07-15 20:27:40.051570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.051579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.051744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.051753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.051969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.051978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.052126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.052134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.052226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.052235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.052391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.052400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.052515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.052524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.052664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.052675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.052845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.052854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 00:29:14.951 [2024-07-15 20:27:40.052960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.951 [2024-07-15 20:27:40.052969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.951 qpair failed and we were unable to recover it. 
00:29:14.951 [2024-07-15 20:27:40.053124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.053134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.053211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.053220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.053387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.053396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.053547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.053556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.053660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.053668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.053848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.053856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.054005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.054014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.054178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.054186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.054458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.054467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.054636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.054645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 
00:29:14.952 [2024-07-15 20:27:40.054761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.054770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.054878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.054887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.055063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.055072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.055165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.055174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.055356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.055365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.055460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.055468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.055563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.055572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.055825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.055834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.055927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.055936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.056038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.056047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 
00:29:14.952 [2024-07-15 20:27:40.056147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.056156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.056250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.056270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.056385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.056394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.056502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.056511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.056750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.056759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.056919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.056928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.057016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.057025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.057186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.057195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.057280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.057289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.057456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.057465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 
00:29:14.952 [2024-07-15 20:27:40.057583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.057591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.057741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.057750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.057998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.058007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.058167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.058175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.058268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.058277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.058427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.058436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.058585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.058593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.058675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.058685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.058804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.058813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.952 [2024-07-15 20:27:40.059058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.059067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 
00:29:14.952 [2024-07-15 20:27:40.059226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.952 [2024-07-15 20:27:40.059235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.952 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.059470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.059480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.059678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.059687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.059769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.059778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.059955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.059964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.060112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.060121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.060231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.060239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.060391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.060400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.060556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.060564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.060660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.060668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 
00:29:14.953 [2024-07-15 20:27:40.060831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.060840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.060996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.061005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.061114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.061123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.061352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.061361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.061529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.061538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.061731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.061739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.061852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.061861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.062125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.062134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.062298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.062307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.062493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.062502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 
00:29:14.953 [2024-07-15 20:27:40.062649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.062658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.062825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.062833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.062943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.062952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.063147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.063155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.063243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.063253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.063368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.063378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.063536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.063544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.063709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.063718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.063871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.063880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.064032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.064041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 
00:29:14.953 [2024-07-15 20:27:40.064189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.064198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.064287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.064296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.064458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.064467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.064615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.064624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.064868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.064877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.065051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.065060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.065212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.065221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.065338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.065349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.065428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.065437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.065591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.065600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 
00:29:14.953 [2024-07-15 20:27:40.065846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.065854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.953 [2024-07-15 20:27:40.066140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.953 [2024-07-15 20:27:40.066148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.953 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.066253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.066266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.066367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.066376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.066548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.066557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.066669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.066678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.066858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.066867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.067056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.067065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.067281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.067290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.067445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.067454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 
00:29:14.954 [2024-07-15 20:27:40.067632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.067641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.067804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.067812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.067915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.067923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.068103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.068111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.068214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.068223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.068386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.068395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.068549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.068558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.068709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.068718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.068955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.068964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.069126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.069135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 
00:29:14.954 [2024-07-15 20:27:40.069302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.069311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.069475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.069484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.069683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.069692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.069848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.069857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.069954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.069963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.070133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.070141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.070238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.070246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.070345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.070354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.070467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.070475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.070744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.070752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 
00:29:14.954 [2024-07-15 20:27:40.070863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.070872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.071019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.071028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.071223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.071232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.071387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.071396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.071569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.071578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.071745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.071754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.071985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.954 [2024-07-15 20:27:40.071994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.954 qpair failed and we were unable to recover it. 00:29:14.954 [2024-07-15 20:27:40.072167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.072178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.072427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.072437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.072629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.072638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 
00:29:14.955 [2024-07-15 20:27:40.072834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.072843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.072958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.072967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.073060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.073069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.073166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.073175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.073336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.073346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.073517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.073527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.073658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.073668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.073867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.073877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.073999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.074010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.074109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.074120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 
00:29:14.955 [2024-07-15 20:27:40.074275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.074286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.074461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.074474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.074679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.074690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.074791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.074803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.074915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.074926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.075025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.075036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.075137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.075149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.075268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.075280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.075374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.075393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.075512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.075535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 
00:29:14.955 [2024-07-15 20:27:40.075745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.075761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.076100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.076114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.076277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.076293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.076417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.076427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.076563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.076574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.076672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.076684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.076782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.076793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.076903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.076913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.077088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.077099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.077211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.077221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 
00:29:14.955 [2024-07-15 20:27:40.077420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.077431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.077525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.077535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.077657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.077667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.077767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.077778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.077944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.077965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.078049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.078059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.078248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.078260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.078425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.955 [2024-07-15 20:27:40.078436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.955 qpair failed and we were unable to recover it. 00:29:14.955 [2024-07-15 20:27:40.078512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.078521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.078614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.078623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 
00:29:14.956 [2024-07-15 20:27:40.078724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.078733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.078891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.078900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.079013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.079022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.079185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.079194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.079350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.079359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.079599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.079608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.079701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.079710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.079864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.079873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.079971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.079980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.080129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.080137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 
00:29:14.956 [2024-07-15 20:27:40.080412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.080422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.080535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.080544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.080791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.080800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.080867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.080876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.080984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.080992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.081077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.081085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.081245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.081258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.081423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.081432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.081583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.081592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.081677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.081685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 
00:29:14.956 [2024-07-15 20:27:40.081853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.081861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.082019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.082028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.082233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.082242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.082414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.082423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.082576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.082585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.082677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.082686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.082787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.082795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.083040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.083049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.083209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.083218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.083335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.083344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 
00:29:14.956 [2024-07-15 20:27:40.083436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.083444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.083633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.083642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.083803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.083812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.083964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.083973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.084051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.084059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.084264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.084273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.084387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.084396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.084498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.084508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.084725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.956 [2024-07-15 20:27:40.084733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.956 qpair failed and we were unable to recover it. 00:29:14.956 [2024-07-15 20:27:40.084836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.084845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 
00:29:14.957 [2024-07-15 20:27:40.084994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.085003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.085223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.085231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.085387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.085396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.085642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.085651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.085737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.085746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.085891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.085899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.085989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.085997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.086187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.086196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.086289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.086298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.086378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.086387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 
00:29:14.957 [2024-07-15 20:27:40.086499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.086508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.086767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.086776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.086927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.086936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.087184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.087193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.087412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.087421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.087533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.087542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.087653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.087661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.087812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.087821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.087974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.087983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.088232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.088241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 
00:29:14.957 [2024-07-15 20:27:40.088408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.088416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.088635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.088643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.088815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.088823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.089045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.089054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.089242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.089251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.089408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.089417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.089638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.089646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.089809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.089819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.089926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.089935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.090167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.090176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 
00:29:14.957 [2024-07-15 20:27:40.090279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.090288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.090390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.090399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.090568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.090576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.090735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.090744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.090813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.090821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.090973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.090982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.091169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.091178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.091271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.091283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.091532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.091541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 00:29:14.957 [2024-07-15 20:27:40.091693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.957 [2024-07-15 20:27:40.091702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.957 qpair failed and we were unable to recover it. 
00:29:14.957 [2024-07-15 20:27:40.091804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.091813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.091990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.091998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.092201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.092210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.092398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.092408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.092565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.092573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.092675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.092684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.092785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.092793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.092889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.092898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.092999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.093008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.093161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.093170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 
00:29:14.958 [2024-07-15 20:27:40.093347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.093357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.093465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.093474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.093630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.093639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.093814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.093823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.093936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.093945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.094120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.094130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.094246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.094258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.094372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.094381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.094616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.094625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.094776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.094784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 
00:29:14.958 [2024-07-15 20:27:40.094971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.094979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.095148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.095156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.095285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.095294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.095457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.095465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.095670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.095702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.095933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.095963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.096134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.096149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.096326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.096341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.096426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.096440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.096786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.096801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 
00:29:14.958 [2024-07-15 20:27:40.097030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.097044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.097215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.097229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.097408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.097423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.097693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.097704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.097893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.097902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.958 [2024-07-15 20:27:40.098072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.958 [2024-07-15 20:27:40.098080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.958 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.098247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.098259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.098422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.098433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.098679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.098687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.098791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.098800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 
00:29:14.959 [2024-07-15 20:27:40.098897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.098906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.099063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.099072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.099223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.099231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.099394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.099404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.099550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.099559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.099777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.099786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.099950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.099958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.100173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.100182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.100270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.100280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.100433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.100443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 
00:29:14.959 [2024-07-15 20:27:40.100683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.100692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.100802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.100811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.100920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.100929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.101038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.101046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.101127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.101136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.101305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.101315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.101411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.101420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.101536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.101545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.101763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.101772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.101879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.101887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 
00:29:14.959 [2024-07-15 20:27:40.102072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.102080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.102304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.102313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.102408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.102417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.102577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.102586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.102692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.102709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.102890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.102904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.103074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.103088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.103189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.103200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.103301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.103310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.103543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.103552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 
00:29:14.959 [2024-07-15 20:27:40.103832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.103840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.104079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.104087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.104204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.104213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.104377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.959 [2024-07-15 20:27:40.104386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.959 qpair failed and we were unable to recover it. 00:29:14.959 [2024-07-15 20:27:40.104547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.104556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.104753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.104762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.104924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.104932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.105151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.105161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.105411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.105420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.105639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.105647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 
00:29:14.960 [2024-07-15 20:27:40.105822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.105830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.105993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.106001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.106136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.106145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.106365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.106374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.106542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.106551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.106800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.106809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.107011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.107020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.107108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.107117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.107361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.107370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.107545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.107555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 
00:29:14.960 [2024-07-15 20:27:40.107729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.107739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.107859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.107868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.108038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.108046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.108129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.108138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.108241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.108250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.108454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.108465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.108584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.108593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.108684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.108693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.108794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.108802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.108985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.108994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 
00:29:14.960 [2024-07-15 20:27:40.109209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.109218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.109306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.109316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.109501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.109510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.109661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.109670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.109915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.109925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.110105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.110114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.110272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.110282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.110527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.110536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.110694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.110703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.110850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.110859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 
00:29:14.960 [2024-07-15 20:27:40.111026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.111036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.111129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.111137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.111261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.111270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.111421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.111430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.111571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.960 [2024-07-15 20:27:40.111580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.960 qpair failed and we were unable to recover it. 00:29:14.960 [2024-07-15 20:27:40.111667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.111676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.111921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.111930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.112036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.112045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.112197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.112206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.112296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.112306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 
00:29:14.961 [2024-07-15 20:27:40.112575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.112585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.112754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.112762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.112914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.112924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.113018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.113027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.113202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.113211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.113307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.113316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.113483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.113492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.113575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.113583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.113676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.113684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.113905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.113914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 
00:29:14.961 [2024-07-15 20:27:40.114021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.114030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.114188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.114196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.114283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.114293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.114446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.114455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.114565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.114573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.114760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.114768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.114931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.114940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.115183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.115191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.115310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.115319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.115420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.115429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 
00:29:14.961 [2024-07-15 20:27:40.115510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.115519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.115695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.115704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.115865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.115873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.116051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.116060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.116237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.116248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.116399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.116408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.116558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.116566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.116812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.116820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.116972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.116981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.117081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.117089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 
00:29:14.961 [2024-07-15 20:27:40.117276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.117285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.117377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.117386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.117603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.117612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.117724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.117733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.117862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.117870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.118117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.961 [2024-07-15 20:27:40.118126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.961 qpair failed and we were unable to recover it. 00:29:14.961 [2024-07-15 20:27:40.118211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.118220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.118325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.118334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.118497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.118506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.118655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.118664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 
00:29:14.962 [2024-07-15 20:27:40.118810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.118819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.118924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.118932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.119048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.119057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.119228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.119237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.119328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.119337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.119591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.119599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.119677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.119686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.119787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.119796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.120040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.120048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.120203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.120212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 
00:29:14.962 [2024-07-15 20:27:40.120298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.120307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.120484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.120494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.120592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.120601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.120821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.120830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.120924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.120935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.121029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.121037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.121142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.121151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.121302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.121311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.121564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.121573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.121738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.121747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 
00:29:14.962 [2024-07-15 20:27:40.121906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.121915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.122133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.122141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.122359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.122369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.122457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.122467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.122617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.122627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.122736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.122746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.122847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.122856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.122926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.122936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.123015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.123023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.123273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.123283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 
00:29:14.962 [2024-07-15 20:27:40.123453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.123462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.123679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.123688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.123793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.123801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.124087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.124097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.124176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.124185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.124336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.124346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.962 [2024-07-15 20:27:40.124499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.962 [2024-07-15 20:27:40.124508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.962 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.124613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.124622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.124796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.124805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.124908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.124916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 
00:29:14.963 [2024-07-15 20:27:40.125090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.125099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.125180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.125189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.125338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.125348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.125431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.125439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.125630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.125639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.125726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.125734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.125840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.125848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.126016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.126025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.126195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.126203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.126367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.126376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 
00:29:14.963 [2024-07-15 20:27:40.126523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.126532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.126622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.126630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.126850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.126859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.126938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.126947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.127050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.127058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.127227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.127237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.127396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.127404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.127586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.127594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.127752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.127760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.127936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.127944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 
00:29:14.963 [2024-07-15 20:27:40.128089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.128097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.128248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.128261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.128422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.128431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.128528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.128538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.128759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.128772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.129009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.129017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.129112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.129121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.129271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.129281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.129433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.129442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.129542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.129551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 
00:29:14.963 [2024-07-15 20:27:40.129773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.129782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.129941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.129949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.963 [2024-07-15 20:27:40.130047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.963 [2024-07-15 20:27:40.130057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.963 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.130138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.130146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.130366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.130376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.130623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.130632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.130811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.130820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.130988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.130997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.131160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.131169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.131418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.131427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 
00:29:14.964 [2024-07-15 20:27:40.131593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.131602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.131870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.131879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.131992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.132001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.132088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.132097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.132198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.132207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.132283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.132292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.132468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.132477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.132583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.132592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.132757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.132766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.133001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.133010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 
00:29:14.964 [2024-07-15 20:27:40.133201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.133210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.133333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.133342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.133586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.133594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.133781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.133789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.134054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.134063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.134333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.134342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.134425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.134438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.134538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.134546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.134764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.134774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.134916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.134924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 
00:29:14.964 [2024-07-15 20:27:40.135074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.135082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.135262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.135271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.135411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.135421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.135670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.135680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.135872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.135882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.136032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.136041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.136232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.136246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.136351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.136360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.136528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.136538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.136705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.136714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 
00:29:14.964 [2024-07-15 20:27:40.136827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.136836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.136916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.136925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.137072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.137082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.964 qpair failed and we were unable to recover it. 00:29:14.964 [2024-07-15 20:27:40.137172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.964 [2024-07-15 20:27:40.137181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.137375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.137385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.137475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.137485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.137648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.137657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.137833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.137843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.138012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.138022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.138101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.138110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 
00:29:14.965 [2024-07-15 20:27:40.138202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.138211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.138363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.138373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.138465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.138474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.138627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.138636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.138848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.138856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.139006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.139015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.139178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.139187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.139273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.139282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.139376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.139385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.139515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.139523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 
00:29:14.965 [2024-07-15 20:27:40.139694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.139703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.139870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.139879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.140117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.140126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.140217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.140226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.140324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.140334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.140423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.140432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.140598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.140608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.140751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.140760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.140912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.140922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.141083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.141091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 
00:29:14.965 [2024-07-15 20:27:40.141263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.141272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.141369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.141379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.141600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.141609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.141765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.141773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.141873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.141885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.142038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.142046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.142202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.142211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.142325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.142334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.142428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.142438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 00:29:14.965 [2024-07-15 20:27:40.142705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.965 [2024-07-15 20:27:40.142714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.965 qpair failed and we were unable to recover it. 
00:29:14.965 [2024-07-15 20:27:40.142884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.965 [2024-07-15 20:27:40.142897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.965 qpair failed and we were unable to recover it.
00:29:14.965 - 00:29:14.971 [2024-07-15 20:27:40.142989 through 20:27:40.176283] (same errors repeated for every subsequent connection attempt in this interval: posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:29:14.971 [2024-07-15 20:27:40.176452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.971 [2024-07-15 20:27:40.176462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.971 qpair failed and we were unable to recover it.
00:29:14.971 [2024-07-15 20:27:40.176594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.176612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.176808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.176817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.176916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.176925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.177125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.177133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.177226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.177234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.177442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.177451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.177552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.177560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.177724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.177734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.177832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.177841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.178002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.178011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 
00:29:14.971 [2024-07-15 20:27:40.178129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.178138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.178292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.178302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.178566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.178575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.178728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.178737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.178829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.178838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.179006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.179015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.179269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.179279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.179538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.179548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.179634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.179643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.179838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.179847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 
00:29:14.971 [2024-07-15 20:27:40.180048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.180060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.180282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.180291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.180463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.180472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.180722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.180731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.180972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.180982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.181177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.181186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.181337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.181346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.181534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.181543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.971 [2024-07-15 20:27:40.181730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.971 [2024-07-15 20:27:40.181738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.971 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.181957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.181966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 
00:29:14.972 [2024-07-15 20:27:40.182038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.182046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.182275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.182284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.182452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.182462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.182713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.182722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.182896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.182905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.183012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.183020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.183314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.183323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.183574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.183583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.183751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.183760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.183913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.183922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 
00:29:14.972 [2024-07-15 20:27:40.184075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.184084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.184171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.184182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.184262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.184271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.184457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.184467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.184585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.184593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.184669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.184677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.184840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.184848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.185035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.185045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.185196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.185205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.185368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.185377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 
00:29:14.972 [2024-07-15 20:27:40.185487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.185496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.185586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.185595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.972 [2024-07-15 20:27:40.185814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.972 [2024-07-15 20:27:40.185824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.972 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.185987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.185996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.186214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.186223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.186316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.186326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.186410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.186418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.186529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.186539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.186733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.186742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.186855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.186864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 
00:29:14.973 [2024-07-15 20:27:40.187038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.187046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.187197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.187206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.187287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.187295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.187384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.187393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.187484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.187494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.187739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.187749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.187993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.188002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.188158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.188166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.188248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.188262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.188427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.188436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 
00:29:14.973 [2024-07-15 20:27:40.188528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.188537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.188758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.188767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.188857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.188866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.189037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.189045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.189180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.189190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.189351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.189360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.189551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.189560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.189644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.189654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.189880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.189888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.189999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.190007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 
00:29:14.973 [2024-07-15 20:27:40.190168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.190176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.190456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.190465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.190570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.190579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.190847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.190856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.190949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.190957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.191211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.191220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.191385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.191394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.191583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.191593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.191784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.191793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.191958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.191971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 
00:29:14.973 [2024-07-15 20:27:40.192229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.192238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.973 qpair failed and we were unable to recover it. 00:29:14.973 [2024-07-15 20:27:40.192344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.973 [2024-07-15 20:27:40.192353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.192541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.192550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.192631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.192640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.192799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.192808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.192956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.192965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.193067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.193075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.193295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.193305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.193556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.193566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.193786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.193795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 
00:29:14.974 [2024-07-15 20:27:40.194027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.194036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.194221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.194236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.194490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.194500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.194613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.194622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.194814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.194827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.194907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.194916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.195082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.195092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.195244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.195253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.195432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.195441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.195566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.195575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 
00:29:14.974 [2024-07-15 20:27:40.195662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.195671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.195780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.195789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.195948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.195957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.196119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.196128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.196298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.196308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.196473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.196482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.196711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.196720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.196891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.196900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.196998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.197007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.197102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.197111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 
00:29:14.974 [2024-07-15 20:27:40.197336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.197345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.197482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.197491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.197659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.197668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.197825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.197834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.198058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.198068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.198262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.198271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.198438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.198447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.198559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.198568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.198738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.198746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 00:29:14.974 [2024-07-15 20:27:40.198820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.974 [2024-07-15 20:27:40.198829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.974 qpair failed and we were unable to recover it. 
00:29:14.974 [2024-07-15 20:27:40.198934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:14.974 [2024-07-15 20:27:40.198943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:14.974 qpair failed and we were unable to recover it.
00:29:14.974 .. 00:29:14.980 [2024-07-15 20:27:40.199109 .. 2024-07-15 20:27:40.230788] the same posix_sock_create connect() failure (errno = 111) and nvme_tcp_qpair_connect_sock error for tqpair=0x7f3704000b90 (addr=10.0.0.2, port=4420) repeat for every reconnect attempt in this interval, and each attempt ends with "qpair failed and we were unable to recover it."
00:29:14.980 [2024-07-15 20:27:40.230871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.980 [2024-07-15 20:27:40.230880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.980 qpair failed and we were unable to recover it. 00:29:14.980 [2024-07-15 20:27:40.231071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.980 [2024-07-15 20:27:40.231081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.980 qpair failed and we were unable to recover it. 00:29:14.980 [2024-07-15 20:27:40.231193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.980 [2024-07-15 20:27:40.231202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.980 qpair failed and we were unable to recover it. 00:29:14.980 [2024-07-15 20:27:40.231372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.980 [2024-07-15 20:27:40.231382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.980 qpair failed and we were unable to recover it. 00:29:14.980 [2024-07-15 20:27:40.231569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.980 [2024-07-15 20:27:40.231578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.980 qpair failed and we were unable to recover it. 00:29:14.980 [2024-07-15 20:27:40.231663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.980 [2024-07-15 20:27:40.231672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.980 qpair failed and we were unable to recover it. 00:29:14.980 [2024-07-15 20:27:40.231881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.980 [2024-07-15 20:27:40.231890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.980 qpair failed and we were unable to recover it. 00:29:14.980 [2024-07-15 20:27:40.232138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.980 [2024-07-15 20:27:40.232147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.980 qpair failed and we were unable to recover it. 00:29:14.980 [2024-07-15 20:27:40.232263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.980 [2024-07-15 20:27:40.232272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.980 qpair failed and we were unable to recover it. 00:29:14.980 [2024-07-15 20:27:40.232375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.980 [2024-07-15 20:27:40.232385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.980 qpair failed and we were unable to recover it. 
00:29:14.980 [2024-07-15 20:27:40.232483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.980 [2024-07-15 20:27:40.232492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.980 qpair failed and we were unable to recover it. 00:29:14.980 [2024-07-15 20:27:40.232583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.980 [2024-07-15 20:27:40.232591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.980 qpair failed and we were unable to recover it. 00:29:14.980 [2024-07-15 20:27:40.232669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.980 [2024-07-15 20:27:40.232680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.980 qpair failed and we were unable to recover it. 00:29:14.980 [2024-07-15 20:27:40.232833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.980 [2024-07-15 20:27:40.232842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.980 qpair failed and we were unable to recover it. 00:29:14.980 [2024-07-15 20:27:40.233021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.980 [2024-07-15 20:27:40.233030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.980 qpair failed and we were unable to recover it. 00:29:14.980 [2024-07-15 20:27:40.233130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.980 [2024-07-15 20:27:40.233138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.980 qpair failed and we were unable to recover it. 00:29:14.980 [2024-07-15 20:27:40.233231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.980 [2024-07-15 20:27:40.233240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.980 qpair failed and we were unable to recover it. 00:29:14.980 [2024-07-15 20:27:40.233341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.980 [2024-07-15 20:27:40.233351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.980 qpair failed and we were unable to recover it. 00:29:14.980 [2024-07-15 20:27:40.233528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.980 [2024-07-15 20:27:40.233537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.980 qpair failed and we were unable to recover it. 00:29:14.980 [2024-07-15 20:27:40.233706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.233715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 
00:29:14.981 [2024-07-15 20:27:40.233810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.233819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.234025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.234034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.234273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.234283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.234478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.234488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.234578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.234587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.234834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.234843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.234927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.234937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.235048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.235057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.235264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.235274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.235373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.235383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 
00:29:14.981 [2024-07-15 20:27:40.235536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.235545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.235641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.235652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.235743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.235752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.235835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.235845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.236033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.236043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.236135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.236145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.236298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.236307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.236409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.236418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.236571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.236581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.236736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.236746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 
00:29:14.981 [2024-07-15 20:27:40.236838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.236847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.237007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.237016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.237099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.237108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.237276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.237286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.237462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.237471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.237561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.237570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.237734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.237744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.237834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.237843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.237943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.237952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.238043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.238052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 
00:29:14.981 [2024-07-15 20:27:40.238219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.238228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.238369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.238392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.238485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.238495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.238714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.238724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.238840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.238849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.239011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.239021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.239215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.981 [2024-07-15 20:27:40.239224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.981 qpair failed and we were unable to recover it. 00:29:14.981 [2024-07-15 20:27:40.239423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.239433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:14.982 [2024-07-15 20:27:40.239606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.239616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:14.982 [2024-07-15 20:27:40.239714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.239723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 
00:29:14.982 [2024-07-15 20:27:40.239875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.239884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:14.982 [2024-07-15 20:27:40.240054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.240064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:14.982 [2024-07-15 20:27:40.240303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.240312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:14.982 [2024-07-15 20:27:40.240525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.240534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:14.982 [2024-07-15 20:27:40.240699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.240709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:14.982 [2024-07-15 20:27:40.240867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.240877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:14.982 [2024-07-15 20:27:40.241172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.241183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:14.982 [2024-07-15 20:27:40.241347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.241356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:14.982 [2024-07-15 20:27:40.241597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.241606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:14.982 [2024-07-15 20:27:40.241698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.241708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 
00:29:14.982 [2024-07-15 20:27:40.241822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.241831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:14.982 [2024-07-15 20:27:40.241927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.241937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:14.982 [2024-07-15 20:27:40.242021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.242030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:14.982 [2024-07-15 20:27:40.242140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.242149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:14.982 [2024-07-15 20:27:40.242240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.242249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:14.982 [2024-07-15 20:27:40.242407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.242416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:14.982 [2024-07-15 20:27:40.242567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.242576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:14.982 [2024-07-15 20:27:40.242675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.242684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:14.982 [2024-07-15 20:27:40.242776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.242785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:14.982 [2024-07-15 20:27:40.242877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.242886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 
00:29:14.982 [2024-07-15 20:27:40.242982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.982 [2024-07-15 20:27:40.242992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:14.982 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.243098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.243108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.243229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.243239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.243311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.243322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.243417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.243426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.243538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.243547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.243641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.243650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.243838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.243847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.244093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.244102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.244250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.244266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 
00:29:15.260 [2024-07-15 20:27:40.244422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.244431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.244525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.244534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.244619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.244628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.244770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.244780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.244879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.244888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.244991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.245000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.245159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.245169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.245274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.245284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.245454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.245464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.245530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.245539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 
00:29:15.260 [2024-07-15 20:27:40.245690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.245699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.245795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.245805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.245918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.245927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.246009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.246018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.246100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.246110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.246195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.246203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.246296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.246305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.260 qpair failed and we were unable to recover it. 00:29:15.260 [2024-07-15 20:27:40.246458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.260 [2024-07-15 20:27:40.246468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 00:29:15.261 [2024-07-15 20:27:40.246568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.246576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 00:29:15.261 [2024-07-15 20:27:40.246672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.246682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 
00:29:15.261 [2024-07-15 20:27:40.246794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.246803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 00:29:15.261 [2024-07-15 20:27:40.246895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.246908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 00:29:15.261 [2024-07-15 20:27:40.247134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.247143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 00:29:15.261 [2024-07-15 20:27:40.247227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.247236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 00:29:15.261 [2024-07-15 20:27:40.247343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.247353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 00:29:15.261 [2024-07-15 20:27:40.247513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.247522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 00:29:15.261 [2024-07-15 20:27:40.247602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.247611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 00:29:15.261 [2024-07-15 20:27:40.247701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.247710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 00:29:15.261 [2024-07-15 20:27:40.247793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.247801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 00:29:15.261 [2024-07-15 20:27:40.247881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.247892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 
00:29:15.261 [2024-07-15 20:27:40.248041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.248050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 00:29:15.261 [2024-07-15 20:27:40.248161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.248170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 00:29:15.261 [2024-07-15 20:27:40.248341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.248350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 00:29:15.261 [2024-07-15 20:27:40.248572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.248582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 00:29:15.261 [2024-07-15 20:27:40.248659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.248668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 00:29:15.261 [2024-07-15 20:27:40.248771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.248780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 00:29:15.261 [2024-07-15 20:27:40.248929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.248938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 00:29:15.261 [2024-07-15 20:27:40.249089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.249098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 00:29:15.261 [2024-07-15 20:27:40.249193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.249203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 00:29:15.261 [2024-07-15 20:27:40.249283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.261 [2024-07-15 20:27:40.249292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.261 qpair failed and we were unable to recover it. 
00:29:15.261 [2024-07-15 20:27:40.249377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.261 [2024-07-15 20:27:40.249386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:15.261 qpair failed and we were unable to recover it.
00:29:15.261 [2024-07-15 20:27:40.249537 through 20:27:40.279285] the same pair of errors (posix.c:1023:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420) and the same "qpair failed and we were unable to recover it." message recur continuously for every reconnect attempt in this interval.
00:29:15.266 [2024-07-15 20:27:40.279400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.266 [2024-07-15 20:27:40.279409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.266 qpair failed and we were unable to recover it. 00:29:15.266 [2024-07-15 20:27:40.279566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.266 [2024-07-15 20:27:40.279574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.266 qpair failed and we were unable to recover it. 00:29:15.266 [2024-07-15 20:27:40.279724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.266 [2024-07-15 20:27:40.279733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.266 qpair failed and we were unable to recover it. 00:29:15.266 [2024-07-15 20:27:40.279899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.266 [2024-07-15 20:27:40.279908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.280079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.280088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.280265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.280274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.280433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.280442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.280632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.280641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.280731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.280739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.280835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.280843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 
00:29:15.267 [2024-07-15 20:27:40.280923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.280932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.281019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.281027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.281114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.281124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.281210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.281219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.281367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.281376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.281530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.281538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.281633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.281642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.281724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.281733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.281964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.281973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.282149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.282158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 
00:29:15.267 [2024-07-15 20:27:40.282314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.282324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.282489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.282498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.282591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.282600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.282746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.282755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.282842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.282851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.282927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.282935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.283159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.283168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.283245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.283259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.283409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.283418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.283595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.283604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 
00:29:15.267 [2024-07-15 20:27:40.283739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.283748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.283917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.283925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.283985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.283993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.284163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.284172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.284267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.284276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.284429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.284437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.284516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.284524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.284672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.284681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.284849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.284859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.284963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.284971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 
00:29:15.267 [2024-07-15 20:27:40.285190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.285199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.285309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.285318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.285429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.285438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.285534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.267 [2024-07-15 20:27:40.285543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.267 qpair failed and we were unable to recover it. 00:29:15.267 [2024-07-15 20:27:40.285610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.285619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.285781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.285791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.285878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.285887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.286000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.286008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.286165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.286175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.286341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.286350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 
00:29:15.268 [2024-07-15 20:27:40.286501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.286510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.286602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.286610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.286762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.286771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.287006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.287015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.287121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.287130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.287315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.287324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.287403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.287412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.287556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.287565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.287741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.287750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.287835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.287844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 
00:29:15.268 [2024-07-15 20:27:40.288101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.288111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.288203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.288212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.288300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.288309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.288404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.288413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.288607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.288616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.288709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.288718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.288867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.288875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.288935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.288944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.289093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.289101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.289160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.289168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 
00:29:15.268 [2024-07-15 20:27:40.289265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.289273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.289358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.289367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.289520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.289530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.289611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.289619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.289725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.289734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.289858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.289867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.289960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.289970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.290134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.290143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.268 [2024-07-15 20:27:40.290217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.268 [2024-07-15 20:27:40.290227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.268 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.290331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.290340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 
00:29:15.269 [2024-07-15 20:27:40.290423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.290432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.290537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.290546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.290693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.290702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.290796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.290805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.290982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.290991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.291083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.291092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.291171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.291179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.291331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.291341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.291438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.291447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.291536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.291545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 
00:29:15.269 [2024-07-15 20:27:40.291603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.291612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.291762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.291772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.291999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.292007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.292089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.292097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.292190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.292199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.292296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.292305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.292488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.292496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.292601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.292609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.292697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.292705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.292878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.292887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 
00:29:15.269 [2024-07-15 20:27:40.293038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.293047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.293136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.293145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.293220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.293229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.293316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.293326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.293405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.293413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.293562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.293572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.293725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.293733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.293832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.293841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.293932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.293941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.294090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.294099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 
00:29:15.269 [2024-07-15 20:27:40.294190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.294198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.294348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.294358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.294509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.294518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.294690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.294699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.294795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.294804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.294898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.294907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.295089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.295097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.295251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.295265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.295346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.295356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 00:29:15.269 [2024-07-15 20:27:40.295437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.269 [2024-07-15 20:27:40.295446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.269 qpair failed and we were unable to recover it. 
00:29:15.270 [2024-07-15 20:27:40.295519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-15 20:27:40.295527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-15 20:27:40.295610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-15 20:27:40.295618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-15 20:27:40.295726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-15 20:27:40.295735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-15 20:27:40.295901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-15 20:27:40.295910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-15 20:27:40.295990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-15 20:27:40.295998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-15 20:27:40.296079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-15 20:27:40.296089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-15 20:27:40.296196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-15 20:27:40.296205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-15 20:27:40.296363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-15 20:27:40.296373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-15 20:27:40.296489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-15 20:27:40.296497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 00:29:15.270 [2024-07-15 20:27:40.296574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.270 [2024-07-15 20:27:40.296583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.270 qpair failed and we were unable to recover it. 
00:29:15.270 [2024-07-15 20:27:40.296674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.270 [2024-07-15 20:27:40.296683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:15.270 qpair failed and we were unable to recover it.
00:29:15.270 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats some 40 times between 20:27:40.296 and 20:27:40.301; only the timestamps differ ...]
00:29:15.271 [... the connect() failed (errno = 111) / sock connection error (tqpair=0x7f3704000b90, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." sequence keeps repeating between 20:27:40.301 and 20:27:40.304, interleaved with the following shell trace lines ...]
00:29:15.271 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:15.271 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:29:15.271 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:15.271 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:15.271 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:15.271 [... the same failure sequence continues without interruption from 20:27:40.304 through 20:27:40.324: connect() to addr=10.0.0.2, port=4420 keeps failing with errno = 111, each attempt ends with "qpair failed and we were unable to recover it.", and the reported tqpair handle is 0x7f3704000b90 until 20:27:40.308825, 0x7f36fc000b90 between 20:27:40.308986 and 20:27:40.314791, then 0x7f3704000b90 again for the remainder of the run ...]
00:29:15.275 [2024-07-15 20:27:40.324615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-15 20:27:40.324624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-15 20:27:40.324707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-15 20:27:40.324716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-15 20:27:40.324816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-15 20:27:40.324825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-15 20:27:40.324922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-15 20:27:40.324931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-15 20:27:40.325086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-15 20:27:40.325096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-15 20:27:40.325194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-15 20:27:40.325204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-15 20:27:40.325289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-15 20:27:40.325298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-15 20:27:40.325378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-15 20:27:40.325387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-15 20:27:40.325557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-15 20:27:40.325566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-15 20:27:40.325660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-15 20:27:40.325669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 
00:29:15.275 [2024-07-15 20:27:40.325750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-15 20:27:40.325758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-15 20:27:40.325856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-15 20:27:40.325865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-15 20:27:40.325948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-15 20:27:40.325957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-15 20:27:40.326122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-15 20:27:40.326131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-15 20:27:40.326221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-15 20:27:40.326229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-15 20:27:40.326339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-15 20:27:40.326348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-15 20:27:40.326444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-15 20:27:40.326453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-15 20:27:40.326536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.275 [2024-07-15 20:27:40.326545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.275 qpair failed and we were unable to recover it. 00:29:15.275 [2024-07-15 20:27:40.326628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.326636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.326720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.326729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 
00:29:15.276 [2024-07-15 20:27:40.326835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.326844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.326931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.326940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.327050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.327059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.327221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.327231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.327334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.327343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.327442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.327451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.327630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.327639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.327719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.327727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.327819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.327827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.327927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.327936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 
00:29:15.276 [2024-07-15 20:27:40.328020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.328030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.328137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.328147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.328263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.328273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.328352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.328362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.328469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.328478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.328565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.328574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.328662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.328671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.328749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.328758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.328845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.328855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.329008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.329016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 
00:29:15.276 [2024-07-15 20:27:40.329082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.329090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.329164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.329173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.329252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.329268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.329397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.329406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.329490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.329499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.329681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.329693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.329793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.329803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.329881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.329891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.329969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.329978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.330077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.330087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 
00:29:15.276 [2024-07-15 20:27:40.330176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.330184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.330333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.330342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.330432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.330441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.330587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.330596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.330698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.330708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.330887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.330895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.331048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.331057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.331142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.331152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.331252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.276 [2024-07-15 20:27:40.331265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.276 qpair failed and we were unable to recover it. 00:29:15.276 [2024-07-15 20:27:40.331367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.331377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 
00:29:15.277 [2024-07-15 20:27:40.331464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.331473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.331551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.331559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.331639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.331649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.331764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.331773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.332038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.332047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.332208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.332217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.332317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.332326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.332420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.332428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.332516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.332525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.332621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.332630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 
00:29:15.277 [2024-07-15 20:27:40.332731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.332740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.332850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.332859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.332940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.332949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.333052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.333061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.333150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.333159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.333250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.333263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.333343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.333351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.333429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.333438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.333540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.333550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.333638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.333647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 
00:29:15.277 [2024-07-15 20:27:40.333765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.333774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.333865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.333875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.333964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.333973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.334057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.334066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.334220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.334230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.334316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.334327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.334409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.334418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.334513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.334522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.334619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.334627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.334787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.334795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 
00:29:15.277 [2024-07-15 20:27:40.334883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.334892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.334986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.334995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.335145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.335154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.277 [2024-07-15 20:27:40.335240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.277 [2024-07-15 20:27:40.335249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.277 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.335373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.335382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.335573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.335582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.335745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.335754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.335834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.335843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.335926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.335934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.336046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.336055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 
00:29:15.278 [2024-07-15 20:27:40.336156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.336164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.336262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.336272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.336371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.336380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.336490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.336499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.336582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.336590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.336761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.336770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.336897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.336906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.337067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.337075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.337161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.337170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.337259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.337269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 
00:29:15.278 [2024-07-15 20:27:40.337433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.337442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.337609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.337617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.337803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.337812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.337930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.337939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.338034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.338043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.338132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.338141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.338225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.338234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.338383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.338392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.338490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.338499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 00:29:15.278 [2024-07-15 20:27:40.338585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.278 [2024-07-15 20:27:40.338595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.278 qpair failed and we were unable to recover it. 
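On Linux, errno 111 is ECONNREFUSED: at this point in nvmf_target_disconnect_tc2 nothing is accepting connections at 10.0.0.2:4420 (the conventional NVMe/TCP port), so every reconnect attempt made by the host driver is refused and the qpair stays unrecoverable. A minimal, hypothetical shell probe (not part of the test scripts) that reproduces the same condition against an endpoint with no listener:

# Probe the same address/port as the failures above; with no listener the
# connect is refused (errno 111), or times out if the host is unreachable.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
  echo "listener reachable - reconnect attempts would succeed"
else
  echo "connect refused or timed out - qpair reconnects keep failing"
fi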
00:29:15.278 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:15.278 [2024-07-15 20:27:40.338682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.278 [2024-07-15 20:27:40.338692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:15.278 qpair failed and we were unable to recover it.
[... three more identical connect()/qpair failure sequences ...]
00:29:15.278 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
[... three identical connect()/qpair failure sequences ...]
00:29:15.278 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
[... one identical connect()/qpair failure sequence ...]
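The nvmf/common.sh@484 trace above installs the suite's cleanup handler: on interrupt, termination, or normal exit it dumps shared-memory diagnostics and tears the NVMe-oF target down. A minimal sketch of the same trap idiom, with a placeholder body standing in for the real process_shm/nvmftestfini helpers:

#!/usr/bin/env bash
# Cleanup-trap sketch modelled on the traced command; the body is a placeholder.
cleanup() {
  # real suite: process_shm --id "$NVMF_APP_SHM_ID" || :   (collect shm diagnostics, ignore failures)
  # real suite: nvmftestfini                               (stop the target, restore the environment)
  echo "cleaning up nvmf test state"
}
trap cleanup SIGINT SIGTERM EXIT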
00:29:15.278 [2024-07-15 20:27:40.339585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.278 [2024-07-15 20:27:40.339594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:15.278 qpair failed and we were unable to recover it.
00:29:15.278 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... eight identical connect()/qpair failure sequences ...]
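The host/target_disconnect.sh@19 trace shows the test provisioning its backing device: rpc_cmd is the suite's wrapper around scripts/rpc.py, and bdev_malloc_create 64 512 -b Malloc0 creates a 64 MB RAM-backed bdev with a 512-byte block size named Malloc0. A hedged sketch of the equivalent standalone RPC calls (the transport/subsystem/listener follow-ups are illustrative of this kind of NVMe/TCP test, not taken from this log):

# Same malloc bdev the trace above creates, issued directly against a running target:
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# Illustrative follow-ups to expose it over NVMe/TCP (names are assumptions, not from this log):
./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420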
00:29:15.279 [2024-07-15 20:27:40.340545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:15.279 [2024-07-15 20:27:40.340555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420
00:29:15.279 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats continuously through 20:27:40.343856 ...]
00:29:15.279 [2024-07-15 20:27:40.343956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.279 [2024-07-15 20:27:40.343966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.279 qpair failed and we were unable to recover it. 00:29:15.279 [2024-07-15 20:27:40.344059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.279 [2024-07-15 20:27:40.344069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.279 qpair failed and we were unable to recover it. 00:29:15.279 [2024-07-15 20:27:40.344153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.279 [2024-07-15 20:27:40.344162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.279 qpair failed and we were unable to recover it. 00:29:15.279 [2024-07-15 20:27:40.344318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.279 [2024-07-15 20:27:40.344328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.279 qpair failed and we were unable to recover it. 00:29:15.279 [2024-07-15 20:27:40.344450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.279 [2024-07-15 20:27:40.344459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.279 qpair failed and we were unable to recover it. 00:29:15.279 [2024-07-15 20:27:40.344613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.279 [2024-07-15 20:27:40.344622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.279 qpair failed and we were unable to recover it. 00:29:15.279 [2024-07-15 20:27:40.344717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.279 [2024-07-15 20:27:40.344726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.279 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.344828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.344837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.344935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.344944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.345050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.345058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 
00:29:15.280 [2024-07-15 20:27:40.345143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.345151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.345235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.345245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.345354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.345364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.345456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.345465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.345555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.345564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.345714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.345724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.345821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.345829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.345931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.345939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.346021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.346031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.346194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.346203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 
00:29:15.280 [2024-07-15 20:27:40.346312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.346321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.346407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.346416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.346551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.346560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.346733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.346742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.346835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.346844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.346941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.346950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.347033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.347042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.347203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.347212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.347329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.347340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.347439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.347448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 
00:29:15.280 [2024-07-15 20:27:40.347610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.347619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.347728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.347737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.347827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.347836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.347998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.348007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.348091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.348100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.348183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.348192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.348280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.348291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.348376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.348386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.348471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.348480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.348646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.348655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 
00:29:15.280 [2024-07-15 20:27:40.348743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.348752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.348844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.348853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.348936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.348945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.349022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.349030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.349102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.349111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.349218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.349228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.349505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.349515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.280 [2024-07-15 20:27:40.349598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.280 [2024-07-15 20:27:40.349607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.280 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.349691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.349700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.349860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.349869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 
00:29:15.281 [2024-07-15 20:27:40.350022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.350031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.350132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.350141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.350230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.350239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.350352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.350362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.350442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.350451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.350535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.350544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.350628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.350638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.350715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.350724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.350872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.350882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.350963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.350972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 
00:29:15.281 [2024-07-15 20:27:40.351051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.351060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.351164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.351173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.351279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.351289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.351380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.351390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.351539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.351549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.351633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.351642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.351720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.351730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.351810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.351819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.351902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.351912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.352061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.352070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 
00:29:15.281 [2024-07-15 20:27:40.352162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.352171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.352251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.352265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.352352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.352361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.352443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.352453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.352602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.352611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.352704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.281 [2024-07-15 20:27:40.352713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.281 qpair failed and we were unable to recover it. 00:29:15.281 [2024-07-15 20:27:40.352803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.352814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.352894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.352903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.352974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.352983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.353062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.353072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 
00:29:15.282 [2024-07-15 20:27:40.353165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.353174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.353263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.353272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.353356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.353366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.353540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.353550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.353637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.353646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.353800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.353809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.353988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.353997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.354083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.354092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.354324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.354334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.354432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.354441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 
00:29:15.282 [2024-07-15 20:27:40.354523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.354533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.354626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.354635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.354719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.354728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.354891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.354901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.354981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.354989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.355069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.355080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.355248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.355264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.355348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.355358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.355461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.355470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.355552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.355561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 
00:29:15.282 [2024-07-15 20:27:40.355649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.355657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.355752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.355761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.355848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.355858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.355951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.355961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.356063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.356072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.356231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.356240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.356353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.356362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.356548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.356558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.356655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.356664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.356743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.356753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 
00:29:15.282 [2024-07-15 20:27:40.356838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.356847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.356930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.356939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.357089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.357099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.357196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.357206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.282 qpair failed and we were unable to recover it. 00:29:15.282 [2024-07-15 20:27:40.357292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.282 [2024-07-15 20:27:40.357302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.357389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.357400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.357491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.357501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.357679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.357689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.357780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.357789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.357970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.357980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 
00:29:15.283 [2024-07-15 20:27:40.358078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.358088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.358172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.358182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.358296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.358306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.358389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.358400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.358553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.358563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.358655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.358665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.358748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.358757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.358831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.358841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.358938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.358947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.359044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.359053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 
00:29:15.283 [2024-07-15 20:27:40.359140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.359149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.359309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.359320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.359411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.359420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.359667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.359678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.359840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.359851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.360001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.360011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.360182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.360191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.360282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.360291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.360373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.360383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.360594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.360605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 
00:29:15.283 [2024-07-15 20:27:40.360703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.360712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.360860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.360869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.360952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.360961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.361040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.361054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.361201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.361210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.361380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.361390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.361494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.361503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.361661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.361671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.361768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.361777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.361869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.361877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 
00:29:15.283 [2024-07-15 20:27:40.361968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.361977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.362056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.362065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.362305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.362315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.362491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.362500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.362578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.283 [2024-07-15 20:27:40.362586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.283 qpair failed and we were unable to recover it. 00:29:15.283 [2024-07-15 20:27:40.362756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.362766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.362939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.362948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.363049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.363059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.363144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.363153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.363328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.363338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 
00:29:15.284 [2024-07-15 20:27:40.363424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.363433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.363592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.363601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.363699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.363708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.363788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.363797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.363881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.363890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.364037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.364046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.364131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.364140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.364363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.364373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.364466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.364475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.364558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.364567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 
00:29:15.284 [2024-07-15 20:27:40.364664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.364673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.364787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.364796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.364873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.364882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.364968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.364978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.365044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.365053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.365204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.365214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.365307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.365317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.365467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.365477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.365628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.365636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.365718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.365726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 
00:29:15.284 [2024-07-15 20:27:40.365915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.365924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.366067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.366076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.366167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.366176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.366266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.366277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.366374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.366383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 Malloc0 00:29:15.284 [2024-07-15 20:27:40.366467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.366478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.366561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.366569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.366669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.366678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.366760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.366769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.366852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.366862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 
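The stray "Malloc0" token interleaved with the errors above is the name of the RAM-backed bdev the test case brings up to back the namespace of the target it is about to configure; the command that created it is not visible in this excerpt. For reference, a malloc bdev with that name is normally created with the standard SPDK RPC helper, roughly as below (the 64 MiB size and 512-byte block size are illustrative, not taken from this log):

  # Illustrative only - the actual invocation is outside this excerpt.
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0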
00:29:15.284 [2024-07-15 20:27:40.366942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.284 [2024-07-15 20:27:40.366952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.367134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.367143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.367240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.367250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:15.284 [2024-07-15 20:27:40.367425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.367436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.284 [2024-07-15 20:27:40.367588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.284 [2024-07-15 20:27:40.367598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.284 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.284 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.367689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.367700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.367780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.367788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.285 [2024-07-15 20:27:40.367939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.367950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 
00:29:15.285 [2024-07-15 20:27:40.368137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.368146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.368240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.368249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.368370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.368379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.368456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.368464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.368552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.368561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.368715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.368724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.368802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.368812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.368891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.368900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.368992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.369001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.369078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.369086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 
00:29:15.285 [2024-07-15 20:27:40.369236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.369247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.369358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.369367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.369519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.369528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.369607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.369615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.369715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.369724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.369810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.369818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.369982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.369991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.370075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.370084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.370188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.370197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.370320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.370329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 
00:29:15.285 [2024-07-15 20:27:40.370408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.370417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.370497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.370506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.370605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.370615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.370711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.370720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.370881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.370890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.370984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.370993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.371142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.371151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.371247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.371276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.371357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.371366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.371530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.371539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 
00:29:15.285 [2024-07-15 20:27:40.371620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.371629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.371779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.371788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.371974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.371983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.372084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.372093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.372268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.372278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.372371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.372381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.372463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.285 [2024-07-15 20:27:40.372473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.285 qpair failed and we were unable to recover it. 00:29:15.285 [2024-07-15 20:27:40.372559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.372569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.372676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.372685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.372865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.372874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 
00:29:15.286 [2024-07-15 20:27:40.372969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.372978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.373077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.373086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.373170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.373180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.373298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.373308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.373401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.373410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.373490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.373499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.373591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.373599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.373693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.373702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.373790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.373798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.373878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.373887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 
00:29:15.286 [2024-07-15 20:27:40.373978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.373989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.374063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.374071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.374232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.374241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.374341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.374351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.286 [2024-07-15 20:27:40.374325] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.374498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.374529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f370c000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.374656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.374676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.374841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.374856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.374979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.374993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.375088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.375101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 
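The "*** TCP Transport Init ***" notice from nvmf_tcp_create in the block above is the target-side confirmation of the rpc_cmd nvmf_create_transport -t tcp call traced a few lines earlier: the NVMe-oF TCP transport is now initialized. The connect() attempts still fail after this point (now against fresh qpair contexts 0x7f370c000b90 and 0x7f36fc000b90), which is consistent with the subsystem and its TCP listener on 10.0.0.2:4420 not having been created yet. Outside the test wrapper, the standalone equivalent of that RPC is roughly:

  # Rough standalone equivalent of the test's rpc_cmd wrapper; the exact
  # options target_disconnect.sh passes are not visible in this excerpt.
  scripts/rpc.py nvmf_create_transport -t tcp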
00:29:15.286 [2024-07-15 20:27:40.375200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.375214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.375377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.375392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.375494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.375508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.375668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.375682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.375856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.375874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.375993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.376007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.376173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.376187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.376354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.376381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.376481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.376495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.376593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.376607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 
00:29:15.286 [2024-07-15 20:27:40.376780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.376795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.376880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.376894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.286 [2024-07-15 20:27:40.377072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.286 [2024-07-15 20:27:40.377086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.286 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.377266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.377281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.377564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.377578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.377698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.377712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.377942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.377955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.378146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.378160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.378259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.378274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.378447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.378461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 
00:29:15.287 [2024-07-15 20:27:40.378592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.378606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.378703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.378716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.378823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.378838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.378924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.378938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.379048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.379062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.379239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.379260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.379358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.379372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.379499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.379512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.379617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.379631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.379747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.379761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 
00:29:15.287 [2024-07-15 20:27:40.379857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.379871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f36fc000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.379969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.379980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.380074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.380083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.380245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.380258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.380345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.380354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.380451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.380460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.380548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.380558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.380651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.380660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.380736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.380745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.380853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.380862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 
00:29:15.287 [2024-07-15 20:27:40.380974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.380983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.381069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.381077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.381178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.381187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.381281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.381291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.381440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.381451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.381533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.381542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.381632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.381641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.381787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.381796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.381896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.381905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.381994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.382004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 
00:29:15.287 [2024-07-15 20:27:40.382162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.382171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.382275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.382285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.382363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.382372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.287 [2024-07-15 20:27:40.382473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.287 [2024-07-15 20:27:40.382482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.287 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.382571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.382581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.288 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:15.288 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.288 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.288 [2024-07-15 20:27:40.383323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.383341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.383526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.383536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.383699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.383708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 
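The rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 call traced above creates the target subsystem (allowing any host, with the given serial number). Connections to 10.0.0.2:4420 will keep being refused until a namespace and a TCP listener are also attached to it; the usual follow-up RPCs look roughly like the sketch below, reusing the subsystem NQN, bdev name, address, and port that appear elsewhere in this log (the exact commands the test runs are not shown in this excerpt):

  # Typical follow-up configuration; illustrative, not copied from the test.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420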
00:29:15.288 [2024-07-15 20:27:40.383867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.383876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.383967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.383976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.384139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.384149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.384228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.384237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.384350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.384360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.384514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.384523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.384604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.384614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.384807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.384817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.385059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.385068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.385171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.385180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 
00:29:15.288 [2024-07-15 20:27:40.385265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.385275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.385433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.385443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.385523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.385532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.385683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.385691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.385800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.385809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.385973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.385982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.386138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.386148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.386240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.386249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.386335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.386344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.386442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.386451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 
00:29:15.288 [2024-07-15 20:27:40.386623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.386632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.386727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.386736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.386846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.386855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.386942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.386951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.387028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.387037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.387118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.387127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.387307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.387317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.387488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.387497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.387586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.387595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.387741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.387749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 
00:29:15.288 [2024-07-15 20:27:40.387847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.387857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.387953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.387961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.388055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.388064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.388166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.388175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.288 qpair failed and we were unable to recover it. 00:29:15.288 [2024-07-15 20:27:40.388323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.288 [2024-07-15 20:27:40.388333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.388508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.388518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.388669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.388678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.388851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.388861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.388973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.388982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.389147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.389156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 
00:29:15.289 [2024-07-15 20:27:40.389323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.389332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.389434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.389443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.389606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.389615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.389770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.389778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.389925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.389934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.390031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.390040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.390213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.390222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.390388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.390397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.390476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.390485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.390566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.390575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 
00:29:15.289 [2024-07-15 20:27:40.390693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.390702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.390795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.390806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.289 [2024-07-15 20:27:40.390891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.390900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.390982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.390991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:15.289 [2024-07-15 20:27:40.391154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.391163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.289 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.289 [2024-07-15 20:27:40.391381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.391391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.391481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.391489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.391661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.391670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 
00:29:15.289 [2024-07-15 20:27:40.391758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.391767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.391950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.391960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.392054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.392063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.392289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.392298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.392401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.392410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.392508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.392518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.392611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.392619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.392698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.392706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.392827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.392836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.393001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.393010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 
00:29:15.289 [2024-07-15 20:27:40.393106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.393115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.393215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.393223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.393318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.393327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.393430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.393439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.393665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.393674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.289 [2024-07-15 20:27:40.393762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.289 [2024-07-15 20:27:40.393771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.289 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.393952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.393960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.394127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.394135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.394216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.394227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.394305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.394315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 
00:29:15.290 [2024-07-15 20:27:40.394403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.394412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.394502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.394510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.394696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.394704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.394870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.394878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.394972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.394981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.395142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.395151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.395333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.395343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.395506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.395515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.395671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.395680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.395841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.395850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 
00:29:15.290 [2024-07-15 20:27:40.395948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.395957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.396028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.396036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.396139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.396147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.396228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.396236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.396333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.396342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.396439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.396447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.396581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.396589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.396757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.396766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.396918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.396926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.397086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.397095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 
00:29:15.290 [2024-07-15 20:27:40.397259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.397269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.397333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.397343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.397501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.397510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.397670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.397679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.397812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.397820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.398000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.398009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.398233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.398242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.398342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.398351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.398482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.398491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 00:29:15.290 [2024-07-15 20:27:40.398758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.290 [2024-07-15 20:27:40.398767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.290 qpair failed and we were unable to recover it. 
00:29:15.290 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.290 [2024-07-15 20:27:40.398889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.398899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.398992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:15.291 [2024-07-15 20:27:40.399001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.291 [2024-07-15 20:27:40.399151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.399161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.291 [2024-07-15 20:27:40.399321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.399331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.399504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.399513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.399609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.399617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.399735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.399745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.399961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.399970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 
00:29:15.291 [2024-07-15 20:27:40.400131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.400140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.400233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.400242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.400328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.400337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.400408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.400417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.400520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.400528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.400619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.400628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.400791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.400800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.400888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.400896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.401071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.401080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.401227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.401236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 
00:29:15.291 [2024-07-15 20:27:40.401409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.401418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.401514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.401523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.401734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.401743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.401815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.401824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.401914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.401922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.402095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.402104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.402269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.402279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.402361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.402370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.402524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.402533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.402695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.402703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 
00:29:15.291 [2024-07-15 20:27:40.402798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.402806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.402907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.291 [2024-07-15 20:27:40.402915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3704000b90 with addr=10.0.0.2, port=4420 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 [2024-07-15 20:27:40.403145] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.291 [2024-07-15 20:27:40.405007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.291 [2024-07-15 20:27:40.405115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.291 [2024-07-15 20:27:40.405133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.291 [2024-07-15 20:27:40.405142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.291 [2024-07-15 20:27:40.405148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.291 [2024-07-15 20:27:40.405167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.291 qpair failed and we were unable to recover it. 
00:29:15.291 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.291 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:15.291 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.291 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.291 [2024-07-15 20:27:40.414969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.291 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.291 [2024-07-15 20:27:40.415063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.291 [2024-07-15 20:27:40.415080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.291 [2024-07-15 20:27:40.415086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.291 [2024-07-15 20:27:40.415092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.291 [2024-07-15 20:27:40.415107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.291 qpair failed and we were unable to recover it. 00:29:15.291 20:27:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 216782 00:29:15.291 [2024-07-15 20:27:40.424954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.291 [2024-07-15 20:27:40.425074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.291 [2024-07-15 20:27:40.425091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.291 [2024-07-15 20:27:40.425098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.292 [2024-07-15 20:27:40.425104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.292 [2024-07-15 20:27:40.425118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.292 qpair failed and we were unable to recover it. 
00:29:15.292 [2024-07-15 20:27:40.435147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.292 [2024-07-15 20:27:40.435272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.292 [2024-07-15 20:27:40.435288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.292 [2024-07-15 20:27:40.435294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.292 [2024-07-15 20:27:40.435300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.292 [2024-07-15 20:27:40.435315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.292 qpair failed and we were unable to recover it. 00:29:15.292 [2024-07-15 20:27:40.444941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.292 [2024-07-15 20:27:40.445032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.292 [2024-07-15 20:27:40.445048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.292 [2024-07-15 20:27:40.445056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.292 [2024-07-15 20:27:40.445062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.292 [2024-07-15 20:27:40.445076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.292 qpair failed and we were unable to recover it. 00:29:15.292 [2024-07-15 20:27:40.455027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.292 [2024-07-15 20:27:40.455117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.292 [2024-07-15 20:27:40.455133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.292 [2024-07-15 20:27:40.455139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.292 [2024-07-15 20:27:40.455145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.292 [2024-07-15 20:27:40.455159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.292 qpair failed and we were unable to recover it. 
00:29:15.292 [2024-07-15 20:27:40.465015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.292 [2024-07-15 20:27:40.465111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.292 [2024-07-15 20:27:40.465126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.292 [2024-07-15 20:27:40.465132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.292 [2024-07-15 20:27:40.465138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.292 [2024-07-15 20:27:40.465152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.292 qpair failed and we were unable to recover it. 00:29:15.292 [2024-07-15 20:27:40.475178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.292 [2024-07-15 20:27:40.475320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.292 [2024-07-15 20:27:40.475336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.292 [2024-07-15 20:27:40.475343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.292 [2024-07-15 20:27:40.475348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.292 [2024-07-15 20:27:40.475363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.292 qpair failed and we were unable to recover it. 00:29:15.292 [2024-07-15 20:27:40.485085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.292 [2024-07-15 20:27:40.485182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.292 [2024-07-15 20:27:40.485198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.292 [2024-07-15 20:27:40.485204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.292 [2024-07-15 20:27:40.485210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.292 [2024-07-15 20:27:40.485225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.292 qpair failed and we were unable to recover it. 
00:29:15.292 [2024-07-15 20:27:40.495067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.292 [2024-07-15 20:27:40.495155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.292 [2024-07-15 20:27:40.495172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.292 [2024-07-15 20:27:40.495178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.292 [2024-07-15 20:27:40.495183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.292 [2024-07-15 20:27:40.495198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.292 qpair failed and we were unable to recover it. 00:29:15.292 [2024-07-15 20:27:40.505146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.292 [2024-07-15 20:27:40.505228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.292 [2024-07-15 20:27:40.505244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.292 [2024-07-15 20:27:40.505250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.292 [2024-07-15 20:27:40.505260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.292 [2024-07-15 20:27:40.505275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.292 qpair failed and we were unable to recover it. 00:29:15.292 [2024-07-15 20:27:40.515377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.292 [2024-07-15 20:27:40.515501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.292 [2024-07-15 20:27:40.515517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.292 [2024-07-15 20:27:40.515523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.292 [2024-07-15 20:27:40.515529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.292 [2024-07-15 20:27:40.515543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.292 qpair failed and we were unable to recover it. 
00:29:15.292 [2024-07-15 20:27:40.525272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.292 [2024-07-15 20:27:40.525361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.292 [2024-07-15 20:27:40.525376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.292 [2024-07-15 20:27:40.525383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.292 [2024-07-15 20:27:40.525388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.292 [2024-07-15 20:27:40.525402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.292 qpair failed and we were unable to recover it. 00:29:15.292 [2024-07-15 20:27:40.535292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.292 [2024-07-15 20:27:40.535385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.292 [2024-07-15 20:27:40.535400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.292 [2024-07-15 20:27:40.535409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.292 [2024-07-15 20:27:40.535414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.292 [2024-07-15 20:27:40.535428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.292 qpair failed and we were unable to recover it. 00:29:15.292 [2024-07-15 20:27:40.545250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.292 [2024-07-15 20:27:40.545342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.292 [2024-07-15 20:27:40.545358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.292 [2024-07-15 20:27:40.545364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.292 [2024-07-15 20:27:40.545370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.292 [2024-07-15 20:27:40.545383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.292 qpair failed and we were unable to recover it. 
00:29:15.292 [2024-07-15 20:27:40.555509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.292 [2024-07-15 20:27:40.555625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.292 [2024-07-15 20:27:40.555640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.292 [2024-07-15 20:27:40.555647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.292 [2024-07-15 20:27:40.555652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.292 [2024-07-15 20:27:40.555666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.292 qpair failed and we were unable to recover it. 00:29:15.292 [2024-07-15 20:27:40.565295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.292 [2024-07-15 20:27:40.565381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.292 [2024-07-15 20:27:40.565396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.292 [2024-07-15 20:27:40.565402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.292 [2024-07-15 20:27:40.565408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.292 [2024-07-15 20:27:40.565422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.292 qpair failed and we were unable to recover it. 00:29:15.292 [2024-07-15 20:27:40.575417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.293 [2024-07-15 20:27:40.575503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.293 [2024-07-15 20:27:40.575518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.293 [2024-07-15 20:27:40.575525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.293 [2024-07-15 20:27:40.575530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.293 [2024-07-15 20:27:40.575543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.293 qpair failed and we were unable to recover it. 
00:29:15.293 [2024-07-15 20:27:40.585407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.293 [2024-07-15 20:27:40.585487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.293 [2024-07-15 20:27:40.585502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.293 [2024-07-15 20:27:40.585508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.293 [2024-07-15 20:27:40.585514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.293 [2024-07-15 20:27:40.585528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.293 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-15 20:27:40.595654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.554 [2024-07-15 20:27:40.595764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.554 [2024-07-15 20:27:40.595781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.554 [2024-07-15 20:27:40.595788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.554 [2024-07-15 20:27:40.595794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.554 [2024-07-15 20:27:40.595809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-15 20:27:40.605415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.554 [2024-07-15 20:27:40.605501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.554 [2024-07-15 20:27:40.605517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.554 [2024-07-15 20:27:40.605523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.554 [2024-07-15 20:27:40.605529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.554 [2024-07-15 20:27:40.605543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.554 qpair failed and we were unable to recover it. 
00:29:15.554 [2024-07-15 20:27:40.615523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.554 [2024-07-15 20:27:40.615607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.554 [2024-07-15 20:27:40.615622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.554 [2024-07-15 20:27:40.615629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.554 [2024-07-15 20:27:40.615635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.554 [2024-07-15 20:27:40.615649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-15 20:27:40.625571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.554 [2024-07-15 20:27:40.625652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.554 [2024-07-15 20:27:40.625670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.554 [2024-07-15 20:27:40.625676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.554 [2024-07-15 20:27:40.625682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.554 [2024-07-15 20:27:40.625696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-15 20:27:40.635750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.554 [2024-07-15 20:27:40.635856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.554 [2024-07-15 20:27:40.635872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.554 [2024-07-15 20:27:40.635878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.554 [2024-07-15 20:27:40.635883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.554 [2024-07-15 20:27:40.635897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.554 qpair failed and we were unable to recover it. 
00:29:15.554 [2024-07-15 20:27:40.645628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.554 [2024-07-15 20:27:40.645735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.554 [2024-07-15 20:27:40.645750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.554 [2024-07-15 20:27:40.645757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.554 [2024-07-15 20:27:40.645762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.554 [2024-07-15 20:27:40.645776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-15 20:27:40.655624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.554 [2024-07-15 20:27:40.655736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.554 [2024-07-15 20:27:40.655752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.554 [2024-07-15 20:27:40.655760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.554 [2024-07-15 20:27:40.655766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.554 [2024-07-15 20:27:40.655779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-15 20:27:40.665649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.554 [2024-07-15 20:27:40.665734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.554 [2024-07-15 20:27:40.665749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.554 [2024-07-15 20:27:40.665756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.554 [2024-07-15 20:27:40.665761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.554 [2024-07-15 20:27:40.665778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.554 qpair failed and we were unable to recover it. 
00:29:15.554 [2024-07-15 20:27:40.675791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.554 [2024-07-15 20:27:40.675899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.554 [2024-07-15 20:27:40.675914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.554 [2024-07-15 20:27:40.675920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.554 [2024-07-15 20:27:40.675926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.554 [2024-07-15 20:27:40.675940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-15 20:27:40.685852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.554 [2024-07-15 20:27:40.685970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.554 [2024-07-15 20:27:40.685985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.554 [2024-07-15 20:27:40.685991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.554 [2024-07-15 20:27:40.685996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.554 [2024-07-15 20:27:40.686010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.554 qpair failed and we were unable to recover it. 00:29:15.554 [2024-07-15 20:27:40.695816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.554 [2024-07-15 20:27:40.695897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.554 [2024-07-15 20:27:40.695911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.554 [2024-07-15 20:27:40.695917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.554 [2024-07-15 20:27:40.695923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.554 [2024-07-15 20:27:40.695936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.554 qpair failed and we were unable to recover it. 
00:29:15.554 [2024-07-15 20:27:40.705812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.554 [2024-07-15 20:27:40.705891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.555 [2024-07-15 20:27:40.705906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.555 [2024-07-15 20:27:40.705913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.555 [2024-07-15 20:27:40.705918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.555 [2024-07-15 20:27:40.705932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-15 20:27:40.716108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.555 [2024-07-15 20:27:40.716218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.555 [2024-07-15 20:27:40.716237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.555 [2024-07-15 20:27:40.716243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.555 [2024-07-15 20:27:40.716249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.555 [2024-07-15 20:27:40.716269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-15 20:27:40.725859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.555 [2024-07-15 20:27:40.725945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.555 [2024-07-15 20:27:40.725961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.555 [2024-07-15 20:27:40.725967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.555 [2024-07-15 20:27:40.725973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.555 [2024-07-15 20:27:40.725987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.555 qpair failed and we were unable to recover it. 
00:29:15.555 [2024-07-15 20:27:40.735858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.555 [2024-07-15 20:27:40.735938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.555 [2024-07-15 20:27:40.735953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.555 [2024-07-15 20:27:40.735959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.555 [2024-07-15 20:27:40.735965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.555 [2024-07-15 20:27:40.735979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-15 20:27:40.745888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.555 [2024-07-15 20:27:40.745974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.555 [2024-07-15 20:27:40.745990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.555 [2024-07-15 20:27:40.745997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.555 [2024-07-15 20:27:40.746002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.555 [2024-07-15 20:27:40.746016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-15 20:27:40.756280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.555 [2024-07-15 20:27:40.756433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.555 [2024-07-15 20:27:40.756449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.555 [2024-07-15 20:27:40.756456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.555 [2024-07-15 20:27:40.756464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.555 [2024-07-15 20:27:40.756479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.555 qpair failed and we were unable to recover it. 
00:29:15.555 [2024-07-15 20:27:40.765940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.555 [2024-07-15 20:27:40.766024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.555 [2024-07-15 20:27:40.766038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.555 [2024-07-15 20:27:40.766045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.555 [2024-07-15 20:27:40.766051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.555 [2024-07-15 20:27:40.766065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-15 20:27:40.775967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.555 [2024-07-15 20:27:40.776050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.555 [2024-07-15 20:27:40.776065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.555 [2024-07-15 20:27:40.776072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.555 [2024-07-15 20:27:40.776077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.555 [2024-07-15 20:27:40.776091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-15 20:27:40.786057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.555 [2024-07-15 20:27:40.786151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.555 [2024-07-15 20:27:40.786166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.555 [2024-07-15 20:27:40.786172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.555 [2024-07-15 20:27:40.786178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.555 [2024-07-15 20:27:40.786192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.555 qpair failed and we were unable to recover it. 
00:29:15.555 [2024-07-15 20:27:40.796213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.555 [2024-07-15 20:27:40.796339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.555 [2024-07-15 20:27:40.796355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.555 [2024-07-15 20:27:40.796361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.555 [2024-07-15 20:27:40.796367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.555 [2024-07-15 20:27:40.796381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-15 20:27:40.805996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.555 [2024-07-15 20:27:40.806091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.555 [2024-07-15 20:27:40.806106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.555 [2024-07-15 20:27:40.806113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.555 [2024-07-15 20:27:40.806119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.555 [2024-07-15 20:27:40.806133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-15 20:27:40.816111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.555 [2024-07-15 20:27:40.816242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.555 [2024-07-15 20:27:40.816264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.555 [2024-07-15 20:27:40.816271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.555 [2024-07-15 20:27:40.816276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.555 [2024-07-15 20:27:40.816291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.555 qpair failed and we were unable to recover it. 
00:29:15.555 [2024-07-15 20:27:40.826135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.555 [2024-07-15 20:27:40.826215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.555 [2024-07-15 20:27:40.826230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.555 [2024-07-15 20:27:40.826236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.555 [2024-07-15 20:27:40.826242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.555 [2024-07-15 20:27:40.826262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-15 20:27:40.836352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.555 [2024-07-15 20:27:40.836461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.555 [2024-07-15 20:27:40.836476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.555 [2024-07-15 20:27:40.836483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.555 [2024-07-15 20:27:40.836489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.555 [2024-07-15 20:27:40.836502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.555 qpair failed and we were unable to recover it. 00:29:15.555 [2024-07-15 20:27:40.846196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.555 [2024-07-15 20:27:40.846281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.555 [2024-07-15 20:27:40.846296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.555 [2024-07-15 20:27:40.846302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.556 [2024-07-15 20:27:40.846311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.556 [2024-07-15 20:27:40.846325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.556 qpair failed and we were unable to recover it. 
00:29:15.556 [2024-07-15 20:27:40.856178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.556 [2024-07-15 20:27:40.856263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.556 [2024-07-15 20:27:40.856279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.556 [2024-07-15 20:27:40.856285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.556 [2024-07-15 20:27:40.856290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.556 [2024-07-15 20:27:40.856305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.556 qpair failed and we were unable to recover it. 00:29:15.556 [2024-07-15 20:27:40.866290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.556 [2024-07-15 20:27:40.866367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.556 [2024-07-15 20:27:40.866382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.556 [2024-07-15 20:27:40.866389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.556 [2024-07-15 20:27:40.866394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.556 [2024-07-15 20:27:40.866408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.556 qpair failed and we were unable to recover it. 00:29:15.556 [2024-07-15 20:27:40.876480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.556 [2024-07-15 20:27:40.876589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.556 [2024-07-15 20:27:40.876605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.556 [2024-07-15 20:27:40.876611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.556 [2024-07-15 20:27:40.876617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.556 [2024-07-15 20:27:40.876630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.556 qpair failed and we were unable to recover it. 
00:29:15.556 [2024-07-15 20:27:40.886363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.556 [2024-07-15 20:27:40.886446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.556 [2024-07-15 20:27:40.886461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.556 [2024-07-15 20:27:40.886468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.556 [2024-07-15 20:27:40.886473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.556 [2024-07-15 20:27:40.886487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.556 qpair failed and we were unable to recover it. 00:29:15.556 [2024-07-15 20:27:40.896336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.556 [2024-07-15 20:27:40.896423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.556 [2024-07-15 20:27:40.896438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.556 [2024-07-15 20:27:40.896445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.556 [2024-07-15 20:27:40.896450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.556 [2024-07-15 20:27:40.896464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.556 qpair failed and we were unable to recover it. 00:29:15.816 [2024-07-15 20:27:40.906424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.816 [2024-07-15 20:27:40.906502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.816 [2024-07-15 20:27:40.906518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.816 [2024-07-15 20:27:40.906525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.816 [2024-07-15 20:27:40.906530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.816 [2024-07-15 20:27:40.906545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.816 qpair failed and we were unable to recover it. 
00:29:15.816 [2024-07-15 20:27:40.916604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.816 [2024-07-15 20:27:40.916721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.816 [2024-07-15 20:27:40.916736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.816 [2024-07-15 20:27:40.916743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.816 [2024-07-15 20:27:40.916749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.816 [2024-07-15 20:27:40.916763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.816 qpair failed and we were unable to recover it. 00:29:15.816 [2024-07-15 20:27:40.926422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.816 [2024-07-15 20:27:40.926516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.817 [2024-07-15 20:27:40.926531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.817 [2024-07-15 20:27:40.926537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.817 [2024-07-15 20:27:40.926544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.817 [2024-07-15 20:27:40.926558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.817 qpair failed and we were unable to recover it. 00:29:15.817 [2024-07-15 20:27:40.936469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.817 [2024-07-15 20:27:40.936548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.817 [2024-07-15 20:27:40.936563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.817 [2024-07-15 20:27:40.936572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.817 [2024-07-15 20:27:40.936578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.817 [2024-07-15 20:27:40.936591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.817 qpair failed and we were unable to recover it. 
00:29:15.817 [2024-07-15 20:27:40.946495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.817 [2024-07-15 20:27:40.946571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.817 [2024-07-15 20:27:40.946585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.817 [2024-07-15 20:27:40.946593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.817 [2024-07-15 20:27:40.946598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.817 [2024-07-15 20:27:40.946612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.817 qpair failed and we were unable to recover it. 00:29:15.817 [2024-07-15 20:27:40.956727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.817 [2024-07-15 20:27:40.956838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.817 [2024-07-15 20:27:40.956854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.817 [2024-07-15 20:27:40.956860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.817 [2024-07-15 20:27:40.956866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.817 [2024-07-15 20:27:40.956880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.817 qpair failed and we were unable to recover it. 00:29:15.817 [2024-07-15 20:27:40.966563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.817 [2024-07-15 20:27:40.966647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.817 [2024-07-15 20:27:40.966662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.817 [2024-07-15 20:27:40.966668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.817 [2024-07-15 20:27:40.966674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.817 [2024-07-15 20:27:40.966688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.817 qpair failed and we were unable to recover it. 
00:29:15.817 [2024-07-15 20:27:40.976548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.817 [2024-07-15 20:27:40.976673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.817 [2024-07-15 20:27:40.976688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.817 [2024-07-15 20:27:40.976695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.817 [2024-07-15 20:27:40.976701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.817 [2024-07-15 20:27:40.976715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.817 qpair failed and we were unable to recover it. 00:29:15.817 [2024-07-15 20:27:40.986650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.817 [2024-07-15 20:27:40.986725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.817 [2024-07-15 20:27:40.986740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.817 [2024-07-15 20:27:40.986746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.817 [2024-07-15 20:27:40.986752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.817 [2024-07-15 20:27:40.986765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.817 qpair failed and we were unable to recover it. 00:29:15.817 [2024-07-15 20:27:40.996829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.817 [2024-07-15 20:27:40.996937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.817 [2024-07-15 20:27:40.996952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.817 [2024-07-15 20:27:40.996958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.817 [2024-07-15 20:27:40.996964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.817 [2024-07-15 20:27:40.996979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.817 qpair failed and we were unable to recover it. 
00:29:15.817 [2024-07-15 20:27:41.006681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.817 [2024-07-15 20:27:41.006765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.817 [2024-07-15 20:27:41.006780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.817 [2024-07-15 20:27:41.006786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.817 [2024-07-15 20:27:41.006792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.817 [2024-07-15 20:27:41.006806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.817 qpair failed and we were unable to recover it. 00:29:15.817 [2024-07-15 20:27:41.016714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.817 [2024-07-15 20:27:41.016798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.817 [2024-07-15 20:27:41.016814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.817 [2024-07-15 20:27:41.016820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.817 [2024-07-15 20:27:41.016826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.817 [2024-07-15 20:27:41.016840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.817 qpair failed and we were unable to recover it. 00:29:15.817 [2024-07-15 20:27:41.026732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.817 [2024-07-15 20:27:41.026808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.817 [2024-07-15 20:27:41.026829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.817 [2024-07-15 20:27:41.026836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.817 [2024-07-15 20:27:41.026841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.817 [2024-07-15 20:27:41.026855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.817 qpair failed and we were unable to recover it. 
00:29:15.817 [2024-07-15 20:27:41.036962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.817 [2024-07-15 20:27:41.037070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.817 [2024-07-15 20:27:41.037085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.817 [2024-07-15 20:27:41.037091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.817 [2024-07-15 20:27:41.037096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.817 [2024-07-15 20:27:41.037110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.817 qpair failed and we were unable to recover it. 00:29:15.817 [2024-07-15 20:27:41.046813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.817 [2024-07-15 20:27:41.046903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.817 [2024-07-15 20:27:41.046920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.818 [2024-07-15 20:27:41.046926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.818 [2024-07-15 20:27:41.046932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:15.818 [2024-07-15 20:27:41.046946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.818 qpair failed and we were unable to recover it. 00:29:15.818 [2024-07-15 20:27:41.057056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.818 [2024-07-15 20:27:41.057204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.818 [2024-07-15 20:27:41.057275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.818 [2024-07-15 20:27:41.057302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.818 [2024-07-15 20:27:41.057325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:15.818 [2024-07-15 20:27:41.057372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.818 qpair failed and we were unable to recover it. 
00:29:15.818 [2024-07-15 20:27:41.066889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.818 [2024-07-15 20:27:41.067004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.818 [2024-07-15 20:27:41.067034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.818 [2024-07-15 20:27:41.067050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.818 [2024-07-15 20:27:41.067064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:15.818 [2024-07-15 20:27:41.067101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.818 qpair failed and we were unable to recover it. 00:29:15.818 [2024-07-15 20:27:41.077087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.818 [2024-07-15 20:27:41.077214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.818 [2024-07-15 20:27:41.077237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.818 [2024-07-15 20:27:41.077247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.818 [2024-07-15 20:27:41.077261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:15.818 [2024-07-15 20:27:41.077282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.818 qpair failed and we were unable to recover it. 00:29:15.818 [2024-07-15 20:27:41.086929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.818 [2024-07-15 20:27:41.087048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.818 [2024-07-15 20:27:41.087070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.818 [2024-07-15 20:27:41.087080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.818 [2024-07-15 20:27:41.087090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:15.818 [2024-07-15 20:27:41.087110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.818 qpair failed and we were unable to recover it. 
00:29:15.818 [2024-07-15 20:27:41.096962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.818 [2024-07-15 20:27:41.097059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.818 [2024-07-15 20:27:41.097080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.818 [2024-07-15 20:27:41.097090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.818 [2024-07-15 20:27:41.097099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:15.818 [2024-07-15 20:27:41.097120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.818 qpair failed and we were unable to recover it. 00:29:15.818 [2024-07-15 20:27:41.107021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.818 [2024-07-15 20:27:41.107122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.818 [2024-07-15 20:27:41.107143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.818 [2024-07-15 20:27:41.107154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.818 [2024-07-15 20:27:41.107163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:15.818 [2024-07-15 20:27:41.107184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.818 qpair failed and we were unable to recover it. 00:29:15.818 [2024-07-15 20:27:41.117243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.818 [2024-07-15 20:27:41.117375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.818 [2024-07-15 20:27:41.117401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.818 [2024-07-15 20:27:41.117412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.818 [2024-07-15 20:27:41.117423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:15.818 [2024-07-15 20:27:41.117444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.818 qpair failed and we were unable to recover it. 
00:29:15.818 [2024-07-15 20:27:41.127038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.818 [2024-07-15 20:27:41.127132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.818 [2024-07-15 20:27:41.127154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.818 [2024-07-15 20:27:41.127163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.818 [2024-07-15 20:27:41.127173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:15.818 [2024-07-15 20:27:41.127194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.818 qpair failed and we were unable to recover it. 00:29:15.818 [2024-07-15 20:27:41.137150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.818 [2024-07-15 20:27:41.137291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.818 [2024-07-15 20:27:41.137314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.818 [2024-07-15 20:27:41.137325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.818 [2024-07-15 20:27:41.137333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:15.818 [2024-07-15 20:27:41.137354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.818 qpair failed and we were unable to recover it. 00:29:15.818 [2024-07-15 20:27:41.147117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.818 [2024-07-15 20:27:41.147215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.818 [2024-07-15 20:27:41.147236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.818 [2024-07-15 20:27:41.147246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.818 [2024-07-15 20:27:41.147261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:15.818 [2024-07-15 20:27:41.147282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.818 qpair failed and we were unable to recover it. 
00:29:15.818 [2024-07-15 20:27:41.157341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.818 [2024-07-15 20:27:41.157464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.818 [2024-07-15 20:27:41.157486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.819 [2024-07-15 20:27:41.157496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.819 [2024-07-15 20:27:41.157506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:15.819 [2024-07-15 20:27:41.157530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.819 qpair failed and we were unable to recover it. 00:29:16.079 [2024-07-15 20:27:41.167200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.079 [2024-07-15 20:27:41.167380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.079 [2024-07-15 20:27:41.167402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.079 [2024-07-15 20:27:41.167413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.079 [2024-07-15 20:27:41.167424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.079 [2024-07-15 20:27:41.167445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.079 qpair failed and we were unable to recover it. 00:29:16.079 [2024-07-15 20:27:41.177246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.079 [2024-07-15 20:27:41.177386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.079 [2024-07-15 20:27:41.177408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.079 [2024-07-15 20:27:41.177418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.079 [2024-07-15 20:27:41.177427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.079 [2024-07-15 20:27:41.177448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.079 qpair failed and we were unable to recover it. 
00:29:16.079 [2024-07-15 20:27:41.187270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.079 [2024-07-15 20:27:41.187358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.079 [2024-07-15 20:27:41.187379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.079 [2024-07-15 20:27:41.187390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.079 [2024-07-15 20:27:41.187399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.079 [2024-07-15 20:27:41.187419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.079 qpair failed and we were unable to recover it. 00:29:16.079 [2024-07-15 20:27:41.197478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.079 [2024-07-15 20:27:41.197600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.079 [2024-07-15 20:27:41.197621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.079 [2024-07-15 20:27:41.197631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.079 [2024-07-15 20:27:41.197641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.079 [2024-07-15 20:27:41.197660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.079 qpair failed and we were unable to recover it. 00:29:16.079 [2024-07-15 20:27:41.207353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.079 [2024-07-15 20:27:41.207529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.079 [2024-07-15 20:27:41.207554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.079 [2024-07-15 20:27:41.207564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.079 [2024-07-15 20:27:41.207574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.079 [2024-07-15 20:27:41.207595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.079 qpair failed and we were unable to recover it. 
00:29:16.079 [2024-07-15 20:27:41.217358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.079 [2024-07-15 20:27:41.217496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.079 [2024-07-15 20:27:41.217517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.079 [2024-07-15 20:27:41.217527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.079 [2024-07-15 20:27:41.217536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.079 [2024-07-15 20:27:41.217557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.079 qpair failed and we were unable to recover it. 00:29:16.079 [2024-07-15 20:27:41.227369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.079 [2024-07-15 20:27:41.227462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.079 [2024-07-15 20:27:41.227483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.079 [2024-07-15 20:27:41.227493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.079 [2024-07-15 20:27:41.227503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.079 [2024-07-15 20:27:41.227523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.079 qpair failed and we were unable to recover it. 00:29:16.079 [2024-07-15 20:27:41.237639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.079 [2024-07-15 20:27:41.237763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.079 [2024-07-15 20:27:41.237784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.079 [2024-07-15 20:27:41.237795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.079 [2024-07-15 20:27:41.237805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.079 [2024-07-15 20:27:41.237824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.079 qpair failed and we were unable to recover it. 
00:29:16.079 [2024-07-15 20:27:41.247464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.079 [2024-07-15 20:27:41.247564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.079 [2024-07-15 20:27:41.247585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.079 [2024-07-15 20:27:41.247595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.079 [2024-07-15 20:27:41.247605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.080 [2024-07-15 20:27:41.247630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.080 qpair failed and we were unable to recover it. 00:29:16.080 [2024-07-15 20:27:41.257493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.080 [2024-07-15 20:27:41.257591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.080 [2024-07-15 20:27:41.257612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.080 [2024-07-15 20:27:41.257622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.080 [2024-07-15 20:27:41.257631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.080 [2024-07-15 20:27:41.257650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.080 qpair failed and we were unable to recover it. 00:29:16.080 [2024-07-15 20:27:41.267532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.080 [2024-07-15 20:27:41.267621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.080 [2024-07-15 20:27:41.267641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.080 [2024-07-15 20:27:41.267652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.080 [2024-07-15 20:27:41.267660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.080 [2024-07-15 20:27:41.267681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.080 qpair failed and we were unable to recover it. 
00:29:16.080 [2024-07-15 20:27:41.277743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.080 [2024-07-15 20:27:41.277897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.080 [2024-07-15 20:27:41.277918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.080 [2024-07-15 20:27:41.277927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.080 [2024-07-15 20:27:41.277937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.080 [2024-07-15 20:27:41.277957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.080 qpair failed and we were unable to recover it. 00:29:16.080 [2024-07-15 20:27:41.287576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.080 [2024-07-15 20:27:41.287680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.080 [2024-07-15 20:27:41.287700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.080 [2024-07-15 20:27:41.287710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.080 [2024-07-15 20:27:41.287720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.080 [2024-07-15 20:27:41.287739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.080 qpair failed and we were unable to recover it. 00:29:16.080 [2024-07-15 20:27:41.297611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.080 [2024-07-15 20:27:41.297738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.080 [2024-07-15 20:27:41.297762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.080 [2024-07-15 20:27:41.297772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.080 [2024-07-15 20:27:41.297781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.080 [2024-07-15 20:27:41.297801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.080 qpair failed and we were unable to recover it. 
00:29:16.080 [2024-07-15 20:27:41.307647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.080 [2024-07-15 20:27:41.307770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.080 [2024-07-15 20:27:41.307791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.080 [2024-07-15 20:27:41.307801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.080 [2024-07-15 20:27:41.307811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.080 [2024-07-15 20:27:41.307832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.080 qpair failed and we were unable to recover it. 00:29:16.080 [2024-07-15 20:27:41.317844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.080 [2024-07-15 20:27:41.317968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.080 [2024-07-15 20:27:41.317990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.080 [2024-07-15 20:27:41.318000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.080 [2024-07-15 20:27:41.318010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.080 [2024-07-15 20:27:41.318030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.080 qpair failed and we were unable to recover it. 00:29:16.080 [2024-07-15 20:27:41.327683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.080 [2024-07-15 20:27:41.327788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.080 [2024-07-15 20:27:41.327809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.080 [2024-07-15 20:27:41.327819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.080 [2024-07-15 20:27:41.327830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.080 [2024-07-15 20:27:41.327850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.080 qpair failed and we were unable to recover it. 
00:29:16.080 [2024-07-15 20:27:41.337716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.080 [2024-07-15 20:27:41.337810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.080 [2024-07-15 20:27:41.337831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.080 [2024-07-15 20:27:41.337841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.080 [2024-07-15 20:27:41.337854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.080 [2024-07-15 20:27:41.337874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.080 qpair failed and we were unable to recover it. 00:29:16.080 [2024-07-15 20:27:41.347697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.080 [2024-07-15 20:27:41.347787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.080 [2024-07-15 20:27:41.347808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.080 [2024-07-15 20:27:41.347818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.080 [2024-07-15 20:27:41.347828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.080 [2024-07-15 20:27:41.347848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.080 qpair failed and we were unable to recover it. 00:29:16.080 [2024-07-15 20:27:41.358006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.080 [2024-07-15 20:27:41.358121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.080 [2024-07-15 20:27:41.358142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.080 [2024-07-15 20:27:41.358152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.080 [2024-07-15 20:27:41.358162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.080 [2024-07-15 20:27:41.358181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.080 qpair failed and we were unable to recover it. 
00:29:16.080 [2024-07-15 20:27:41.367838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.080 [2024-07-15 20:27:41.367930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.080 [2024-07-15 20:27:41.367951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.080 [2024-07-15 20:27:41.367962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.080 [2024-07-15 20:27:41.367971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.080 [2024-07-15 20:27:41.367991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.080 qpair failed and we were unable to recover it. 00:29:16.080 [2024-07-15 20:27:41.377867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.080 [2024-07-15 20:27:41.377969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.080 [2024-07-15 20:27:41.377990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.080 [2024-07-15 20:27:41.378000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.080 [2024-07-15 20:27:41.378008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.080 [2024-07-15 20:27:41.378028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.080 qpair failed and we were unable to recover it. 00:29:16.080 [2024-07-15 20:27:41.387898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.080 [2024-07-15 20:27:41.387998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.080 [2024-07-15 20:27:41.388019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.080 [2024-07-15 20:27:41.388029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.080 [2024-07-15 20:27:41.388038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.080 [2024-07-15 20:27:41.388059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.080 qpair failed and we were unable to recover it. 
00:29:16.081 [2024-07-15 20:27:41.398206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.081 [2024-07-15 20:27:41.398360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.081 [2024-07-15 20:27:41.398382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.081 [2024-07-15 20:27:41.398392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.081 [2024-07-15 20:27:41.398400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.081 [2024-07-15 20:27:41.398422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.081 qpair failed and we were unable to recover it. 00:29:16.081 [2024-07-15 20:27:41.407989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.081 [2024-07-15 20:27:41.408113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.081 [2024-07-15 20:27:41.408134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.081 [2024-07-15 20:27:41.408144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.081 [2024-07-15 20:27:41.408154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.081 [2024-07-15 20:27:41.408175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.081 qpair failed and we were unable to recover it. 00:29:16.081 [2024-07-15 20:27:41.418020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.081 [2024-07-15 20:27:41.418122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.081 [2024-07-15 20:27:41.418143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.081 [2024-07-15 20:27:41.418153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.081 [2024-07-15 20:27:41.418162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.081 [2024-07-15 20:27:41.418182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.081 qpair failed and we were unable to recover it. 
00:29:16.342 [2024-07-15 20:27:41.428066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.342 [2024-07-15 20:27:41.428165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.342 [2024-07-15 20:27:41.428186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.342 [2024-07-15 20:27:41.428197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.342 [2024-07-15 20:27:41.428210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.342 [2024-07-15 20:27:41.428230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.342 qpair failed and we were unable to recover it. 00:29:16.342 [2024-07-15 20:27:41.438343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.342 [2024-07-15 20:27:41.438465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.342 [2024-07-15 20:27:41.438486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.342 [2024-07-15 20:27:41.438496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.342 [2024-07-15 20:27:41.438506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.342 [2024-07-15 20:27:41.438527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.342 qpair failed and we were unable to recover it. 00:29:16.342 [2024-07-15 20:27:41.448045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.342 [2024-07-15 20:27:41.448140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.342 [2024-07-15 20:27:41.448161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.342 [2024-07-15 20:27:41.448171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.342 [2024-07-15 20:27:41.448181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.342 [2024-07-15 20:27:41.448201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.342 qpair failed and we were unable to recover it. 
00:29:16.342 [2024-07-15 20:27:41.458182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.342 [2024-07-15 20:27:41.458326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.342 [2024-07-15 20:27:41.458348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.342 [2024-07-15 20:27:41.458359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.342 [2024-07-15 20:27:41.458368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.342 [2024-07-15 20:27:41.458389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.342 qpair failed and we were unable to recover it. 00:29:16.342 [2024-07-15 20:27:41.468150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.342 [2024-07-15 20:27:41.468243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.342 [2024-07-15 20:27:41.468272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.342 [2024-07-15 20:27:41.468283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.342 [2024-07-15 20:27:41.468291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.342 [2024-07-15 20:27:41.468312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.342 qpair failed and we were unable to recover it. 00:29:16.342 [2024-07-15 20:27:41.478381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.342 [2024-07-15 20:27:41.478509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.342 [2024-07-15 20:27:41.478530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.342 [2024-07-15 20:27:41.478540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.342 [2024-07-15 20:27:41.478550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.342 [2024-07-15 20:27:41.478570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.342 qpair failed and we were unable to recover it. 
00:29:16.342 [2024-07-15 20:27:41.488205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.342 [2024-07-15 20:27:41.488302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.342 [2024-07-15 20:27:41.488323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.342 [2024-07-15 20:27:41.488333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.342 [2024-07-15 20:27:41.488342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.342 [2024-07-15 20:27:41.488362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.342 qpair failed and we were unable to recover it. 00:29:16.342 [2024-07-15 20:27:41.498284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.342 [2024-07-15 20:27:41.498385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.342 [2024-07-15 20:27:41.498406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.342 [2024-07-15 20:27:41.498416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.342 [2024-07-15 20:27:41.498425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.342 [2024-07-15 20:27:41.498445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.342 qpair failed and we were unable to recover it. 00:29:16.342 [2024-07-15 20:27:41.508272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.342 [2024-07-15 20:27:41.508367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.342 [2024-07-15 20:27:41.508388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.342 [2024-07-15 20:27:41.508398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.342 [2024-07-15 20:27:41.508408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.342 [2024-07-15 20:27:41.508428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.342 qpair failed and we were unable to recover it. 
00:29:16.342 [2024-07-15 20:27:41.518506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.342 [2024-07-15 20:27:41.518626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.342 [2024-07-15 20:27:41.518647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.342 [2024-07-15 20:27:41.518657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.342 [2024-07-15 20:27:41.518671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.342 [2024-07-15 20:27:41.518691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.343 qpair failed and we were unable to recover it. 00:29:16.343 [2024-07-15 20:27:41.528336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.343 [2024-07-15 20:27:41.528441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.343 [2024-07-15 20:27:41.528461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.343 [2024-07-15 20:27:41.528471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.343 [2024-07-15 20:27:41.528481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.343 [2024-07-15 20:27:41.528501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.343 qpair failed and we were unable to recover it. 00:29:16.343 [2024-07-15 20:27:41.538387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.343 [2024-07-15 20:27:41.538515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.343 [2024-07-15 20:27:41.538535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.343 [2024-07-15 20:27:41.538545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.343 [2024-07-15 20:27:41.538554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.343 [2024-07-15 20:27:41.538574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.343 qpair failed and we were unable to recover it. 
00:29:16.343 [2024-07-15 20:27:41.548426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.343 [2024-07-15 20:27:41.548521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.343 [2024-07-15 20:27:41.548542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.343 [2024-07-15 20:27:41.548552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.343 [2024-07-15 20:27:41.548561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.343 [2024-07-15 20:27:41.548581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.343 qpair failed and we were unable to recover it. 00:29:16.343 [2024-07-15 20:27:41.558702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.343 [2024-07-15 20:27:41.558821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.343 [2024-07-15 20:27:41.558842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.343 [2024-07-15 20:27:41.558852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.343 [2024-07-15 20:27:41.558862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.343 [2024-07-15 20:27:41.558882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.343 qpair failed and we were unable to recover it. 00:29:16.343 [2024-07-15 20:27:41.568520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.343 [2024-07-15 20:27:41.568663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.343 [2024-07-15 20:27:41.568684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.343 [2024-07-15 20:27:41.568693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.343 [2024-07-15 20:27:41.568703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.343 [2024-07-15 20:27:41.568723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.343 qpair failed and we were unable to recover it. 
00:29:16.343 [2024-07-15 20:27:41.578503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.343 [2024-07-15 20:27:41.578593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.343 [2024-07-15 20:27:41.578615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.343 [2024-07-15 20:27:41.578625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.343 [2024-07-15 20:27:41.578633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.343 [2024-07-15 20:27:41.578654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.343 qpair failed and we were unable to recover it. 00:29:16.343 [2024-07-15 20:27:41.588521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.343 [2024-07-15 20:27:41.588609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.343 [2024-07-15 20:27:41.588630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.343 [2024-07-15 20:27:41.588640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.343 [2024-07-15 20:27:41.588650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.343 [2024-07-15 20:27:41.588669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.343 qpair failed and we were unable to recover it. 00:29:16.343 [2024-07-15 20:27:41.598759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.343 [2024-07-15 20:27:41.598881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.343 [2024-07-15 20:27:41.598901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.343 [2024-07-15 20:27:41.598911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.343 [2024-07-15 20:27:41.598921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.343 [2024-07-15 20:27:41.598940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.343 qpair failed and we were unable to recover it. 
00:29:16.343 [2024-07-15 20:27:41.608578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.343 [2024-07-15 20:27:41.608671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.343 [2024-07-15 20:27:41.608691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.343 [2024-07-15 20:27:41.608706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.343 [2024-07-15 20:27:41.608717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.343 [2024-07-15 20:27:41.608737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.343 qpair failed and we were unable to recover it. 00:29:16.343 [2024-07-15 20:27:41.618618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.343 [2024-07-15 20:27:41.618722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.343 [2024-07-15 20:27:41.618744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.343 [2024-07-15 20:27:41.618754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.343 [2024-07-15 20:27:41.618763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.343 [2024-07-15 20:27:41.618782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.343 qpair failed and we were unable to recover it. 00:29:16.343 [2024-07-15 20:27:41.628625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.343 [2024-07-15 20:27:41.628741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.343 [2024-07-15 20:27:41.628762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.343 [2024-07-15 20:27:41.628773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.343 [2024-07-15 20:27:41.628782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.343 [2024-07-15 20:27:41.628803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.343 qpair failed and we were unable to recover it. 
00:29:16.343 [2024-07-15 20:27:41.638859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.343 [2024-07-15 20:27:41.638989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.343 [2024-07-15 20:27:41.639011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.343 [2024-07-15 20:27:41.639020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.343 [2024-07-15 20:27:41.639032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.343 [2024-07-15 20:27:41.639051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.343 qpair failed and we were unable to recover it. 00:29:16.343 [2024-07-15 20:27:41.648759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.343 [2024-07-15 20:27:41.648894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.343 [2024-07-15 20:27:41.648914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.343 [2024-07-15 20:27:41.648924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.343 [2024-07-15 20:27:41.648933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.343 [2024-07-15 20:27:41.648954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.343 qpair failed and we were unable to recover it. 00:29:16.343 [2024-07-15 20:27:41.658794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.343 [2024-07-15 20:27:41.658887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.343 [2024-07-15 20:27:41.658908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.343 [2024-07-15 20:27:41.658918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.343 [2024-07-15 20:27:41.658927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.343 [2024-07-15 20:27:41.658946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.343 qpair failed and we were unable to recover it. 
00:29:16.343 [2024-07-15 20:27:41.668763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.344 [2024-07-15 20:27:41.668882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.344 [2024-07-15 20:27:41.668902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.344 [2024-07-15 20:27:41.668912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.344 [2024-07-15 20:27:41.668922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.344 [2024-07-15 20:27:41.668942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.344 qpair failed and we were unable to recover it. 00:29:16.344 [2024-07-15 20:27:41.679033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.344 [2024-07-15 20:27:41.679152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.344 [2024-07-15 20:27:41.679172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.344 [2024-07-15 20:27:41.679182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.344 [2024-07-15 20:27:41.679191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.344 [2024-07-15 20:27:41.679211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.344 qpair failed and we were unable to recover it. 00:29:16.344 [2024-07-15 20:27:41.688832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.344 [2024-07-15 20:27:41.688928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.344 [2024-07-15 20:27:41.688949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.344 [2024-07-15 20:27:41.688959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.344 [2024-07-15 20:27:41.688968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.344 [2024-07-15 20:27:41.688988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.344 qpair failed and we were unable to recover it. 
00:29:16.605 [2024-07-15 20:27:41.698862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.605 [2024-07-15 20:27:41.698994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.605 [2024-07-15 20:27:41.699015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.605 [2024-07-15 20:27:41.699029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.605 [2024-07-15 20:27:41.699039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.605 [2024-07-15 20:27:41.699059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.605 qpair failed and we were unable to recover it. 00:29:16.605 [2024-07-15 20:27:41.708917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.605 [2024-07-15 20:27:41.709059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.605 [2024-07-15 20:27:41.709081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.605 [2024-07-15 20:27:41.709091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.605 [2024-07-15 20:27:41.709100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.605 [2024-07-15 20:27:41.709121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.605 qpair failed and we were unable to recover it. 00:29:16.605 [2024-07-15 20:27:41.719146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.605 [2024-07-15 20:27:41.719272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.605 [2024-07-15 20:27:41.719294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.605 [2024-07-15 20:27:41.719305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.605 [2024-07-15 20:27:41.719315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.605 [2024-07-15 20:27:41.719335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.605 qpair failed and we were unable to recover it. 
00:29:16.605 [2024-07-15 20:27:41.729019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.605 [2024-07-15 20:27:41.729182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.605 [2024-07-15 20:27:41.729203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.605 [2024-07-15 20:27:41.729213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.605 [2024-07-15 20:27:41.729224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.605 [2024-07-15 20:27:41.729245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.605 qpair failed and we were unable to recover it. 00:29:16.605 [2024-07-15 20:27:41.738980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.605 [2024-07-15 20:27:41.739076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.605 [2024-07-15 20:27:41.739096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.605 [2024-07-15 20:27:41.739106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.605 [2024-07-15 20:27:41.739115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.605 [2024-07-15 20:27:41.739135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.605 qpair failed and we were unable to recover it. 00:29:16.605 [2024-07-15 20:27:41.749023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.605 [2024-07-15 20:27:41.749120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.605 [2024-07-15 20:27:41.749141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.605 [2024-07-15 20:27:41.749151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.605 [2024-07-15 20:27:41.749160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.605 [2024-07-15 20:27:41.749181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.605 qpair failed and we were unable to recover it. 
00:29:16.605 [2024-07-15 20:27:41.759243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.605 [2024-07-15 20:27:41.759400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.605 [2024-07-15 20:27:41.759421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.605 [2024-07-15 20:27:41.759431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.605 [2024-07-15 20:27:41.759440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.605 [2024-07-15 20:27:41.759462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.605 qpair failed and we were unable to recover it. 00:29:16.605 [2024-07-15 20:27:41.769064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.605 [2024-07-15 20:27:41.769171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.605 [2024-07-15 20:27:41.769193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.605 [2024-07-15 20:27:41.769203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.605 [2024-07-15 20:27:41.769213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.605 [2024-07-15 20:27:41.769234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.605 qpair failed and we were unable to recover it. 00:29:16.605 [2024-07-15 20:27:41.779121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.605 [2024-07-15 20:27:41.779217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.605 [2024-07-15 20:27:41.779239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.605 [2024-07-15 20:27:41.779249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.605 [2024-07-15 20:27:41.779264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.605 [2024-07-15 20:27:41.779287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.605 qpair failed and we were unable to recover it. 
00:29:16.605 [2024-07-15 20:27:41.789145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.605 [2024-07-15 20:27:41.789237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.605 [2024-07-15 20:27:41.789264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.605 [2024-07-15 20:27:41.789280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.605 [2024-07-15 20:27:41.789293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.605 [2024-07-15 20:27:41.789315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.605 qpair failed and we were unable to recover it. 00:29:16.605 [2024-07-15 20:27:41.799370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.605 [2024-07-15 20:27:41.799496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.605 [2024-07-15 20:27:41.799517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.605 [2024-07-15 20:27:41.799527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.606 [2024-07-15 20:27:41.799539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.606 [2024-07-15 20:27:41.799559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.606 qpair failed and we were unable to recover it. 00:29:16.606 [2024-07-15 20:27:41.809202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.606 [2024-07-15 20:27:41.809311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.606 [2024-07-15 20:27:41.809332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.606 [2024-07-15 20:27:41.809342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.606 [2024-07-15 20:27:41.809352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.606 [2024-07-15 20:27:41.809372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.606 qpair failed and we were unable to recover it. 
00:29:16.606 [2024-07-15 20:27:41.819265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.606 [2024-07-15 20:27:41.819366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.606 [2024-07-15 20:27:41.819388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.606 [2024-07-15 20:27:41.819398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.606 [2024-07-15 20:27:41.819406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.606 [2024-07-15 20:27:41.819427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.606 qpair failed and we were unable to recover it. 00:29:16.606 [2024-07-15 20:27:41.829213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.606 [2024-07-15 20:27:41.829335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.606 [2024-07-15 20:27:41.829357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.606 [2024-07-15 20:27:41.829366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.606 [2024-07-15 20:27:41.829375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.606 [2024-07-15 20:27:41.829396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.606 qpair failed and we were unable to recover it. 00:29:16.606 [2024-07-15 20:27:41.839502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.606 [2024-07-15 20:27:41.839650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.606 [2024-07-15 20:27:41.839671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.606 [2024-07-15 20:27:41.839681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.606 [2024-07-15 20:27:41.839690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.606 [2024-07-15 20:27:41.839712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.606 qpair failed and we were unable to recover it. 
00:29:16.606 [2024-07-15 20:27:41.849362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.606 [2024-07-15 20:27:41.849461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.606 [2024-07-15 20:27:41.849482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.606 [2024-07-15 20:27:41.849493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.606 [2024-07-15 20:27:41.849504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.606 [2024-07-15 20:27:41.849524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.606 qpair failed and we were unable to recover it. 00:29:16.606 [2024-07-15 20:27:41.859391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.606 [2024-07-15 20:27:41.859506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.606 [2024-07-15 20:27:41.859527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.606 [2024-07-15 20:27:41.859538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.606 [2024-07-15 20:27:41.859546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.606 [2024-07-15 20:27:41.859567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.606 qpair failed and we were unable to recover it. 00:29:16.606 [2024-07-15 20:27:41.869436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.606 [2024-07-15 20:27:41.869525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.606 [2024-07-15 20:27:41.869546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.606 [2024-07-15 20:27:41.869555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.606 [2024-07-15 20:27:41.869565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.606 [2024-07-15 20:27:41.869584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.606 qpair failed and we were unable to recover it. 
00:29:16.606 [2024-07-15 20:27:41.879652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.606 [2024-07-15 20:27:41.879810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.606 [2024-07-15 20:27:41.879835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.606 [2024-07-15 20:27:41.879845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.606 [2024-07-15 20:27:41.879855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.606 [2024-07-15 20:27:41.879874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.606 qpair failed and we were unable to recover it. 00:29:16.606 [2024-07-15 20:27:41.889481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.606 [2024-07-15 20:27:41.889581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.606 [2024-07-15 20:27:41.889602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.606 [2024-07-15 20:27:41.889612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.606 [2024-07-15 20:27:41.889622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.606 [2024-07-15 20:27:41.889641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.606 qpair failed and we were unable to recover it. 00:29:16.606 [2024-07-15 20:27:41.899491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.606 [2024-07-15 20:27:41.899593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.606 [2024-07-15 20:27:41.899614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.606 [2024-07-15 20:27:41.899624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.606 [2024-07-15 20:27:41.899635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.606 [2024-07-15 20:27:41.899655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.606 qpair failed and we were unable to recover it. 
00:29:16.606 [2024-07-15 20:27:41.909543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.606 [2024-07-15 20:27:41.909640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.606 [2024-07-15 20:27:41.909661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.606 [2024-07-15 20:27:41.909670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.606 [2024-07-15 20:27:41.909680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.606 [2024-07-15 20:27:41.909700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.606 qpair failed and we were unable to recover it. 00:29:16.606 [2024-07-15 20:27:41.919735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.606 [2024-07-15 20:27:41.919852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.606 [2024-07-15 20:27:41.919874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.606 [2024-07-15 20:27:41.919883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.606 [2024-07-15 20:27:41.919893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.606 [2024-07-15 20:27:41.919912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.606 qpair failed and we were unable to recover it. 00:29:16.606 [2024-07-15 20:27:41.929638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.606 [2024-07-15 20:27:41.929732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.606 [2024-07-15 20:27:41.929753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.606 [2024-07-15 20:27:41.929763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.606 [2024-07-15 20:27:41.929772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.606 [2024-07-15 20:27:41.929792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.606 qpair failed and we were unable to recover it. 
00:29:16.606 [2024-07-15 20:27:41.939704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.606 [2024-07-15 20:27:41.939799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.606 [2024-07-15 20:27:41.939820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.606 [2024-07-15 20:27:41.939830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.606 [2024-07-15 20:27:41.939839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.606 [2024-07-15 20:27:41.939859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.606 qpair failed and we were unable to recover it. 00:29:16.606 [2024-07-15 20:27:41.949663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.607 [2024-07-15 20:27:41.949753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.607 [2024-07-15 20:27:41.949774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.607 [2024-07-15 20:27:41.949784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.607 [2024-07-15 20:27:41.949793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.607 [2024-07-15 20:27:41.949814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.607 qpair failed and we were unable to recover it. 00:29:16.867 [2024-07-15 20:27:41.959940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.867 [2024-07-15 20:27:41.960114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.867 [2024-07-15 20:27:41.960135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.867 [2024-07-15 20:27:41.960146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.867 [2024-07-15 20:27:41.960157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.867 [2024-07-15 20:27:41.960176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.867 qpair failed and we were unable to recover it. 
00:29:16.867 [2024-07-15 20:27:41.969693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.867 [2024-07-15 20:27:41.969788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.867 [2024-07-15 20:27:41.969816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.867 [2024-07-15 20:27:41.969828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.867 [2024-07-15 20:27:41.969837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.867 [2024-07-15 20:27:41.969858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.867 qpair failed and we were unable to recover it. 00:29:16.867 [2024-07-15 20:27:41.979842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.867 [2024-07-15 20:27:41.979965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.867 [2024-07-15 20:27:41.979986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.867 [2024-07-15 20:27:41.979996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.867 [2024-07-15 20:27:41.980006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.867 [2024-07-15 20:27:41.980027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.867 qpair failed and we were unable to recover it. 00:29:16.867 [2024-07-15 20:27:41.989746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.867 [2024-07-15 20:27:41.989866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.867 [2024-07-15 20:27:41.989886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.867 [2024-07-15 20:27:41.989897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.867 [2024-07-15 20:27:41.989907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.867 [2024-07-15 20:27:41.989926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.867 qpair failed and we were unable to recover it. 
00:29:16.867 [2024-07-15 20:27:42.000070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.868 [2024-07-15 20:27:42.000187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.868 [2024-07-15 20:27:42.000208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.868 [2024-07-15 20:27:42.000218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.868 [2024-07-15 20:27:42.000229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.868 [2024-07-15 20:27:42.000248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.868 qpair failed and we were unable to recover it. 00:29:16.868 [2024-07-15 20:27:42.009870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.868 [2024-07-15 20:27:42.009973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.868 [2024-07-15 20:27:42.009994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.868 [2024-07-15 20:27:42.010004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.868 [2024-07-15 20:27:42.010013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.868 [2024-07-15 20:27:42.010039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.868 qpair failed and we were unable to recover it. 00:29:16.868 [2024-07-15 20:27:42.019906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.868 [2024-07-15 20:27:42.020000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.868 [2024-07-15 20:27:42.020022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.868 [2024-07-15 20:27:42.020032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.868 [2024-07-15 20:27:42.020042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.868 [2024-07-15 20:27:42.020063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.868 qpair failed and we were unable to recover it. 
00:29:16.868 [2024-07-15 20:27:42.029958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.868 [2024-07-15 20:27:42.030047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.868 [2024-07-15 20:27:42.030068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.868 [2024-07-15 20:27:42.030079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.868 [2024-07-15 20:27:42.030088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.868 [2024-07-15 20:27:42.030108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.868 qpair failed and we were unable to recover it. 00:29:16.868 [2024-07-15 20:27:42.040182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.868 [2024-07-15 20:27:42.040310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.868 [2024-07-15 20:27:42.040331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.868 [2024-07-15 20:27:42.040341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.868 [2024-07-15 20:27:42.040359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.868 [2024-07-15 20:27:42.040384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.868 qpair failed and we were unable to recover it. 00:29:16.868 [2024-07-15 20:27:42.049967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.868 [2024-07-15 20:27:42.050090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.868 [2024-07-15 20:27:42.050111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.868 [2024-07-15 20:27:42.050121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.868 [2024-07-15 20:27:42.050131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.868 [2024-07-15 20:27:42.050151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.868 qpair failed and we were unable to recover it. 
00:29:16.868 [2024-07-15 20:27:42.059980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.868 [2024-07-15 20:27:42.060074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.868 [2024-07-15 20:27:42.060099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.868 [2024-07-15 20:27:42.060109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.868 [2024-07-15 20:27:42.060118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.868 [2024-07-15 20:27:42.060138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.868 qpair failed and we were unable to recover it. 00:29:16.868 [2024-07-15 20:27:42.070092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.868 [2024-07-15 20:27:42.070177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.868 [2024-07-15 20:27:42.070200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.868 [2024-07-15 20:27:42.070210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.868 [2024-07-15 20:27:42.070220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.868 [2024-07-15 20:27:42.070240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.868 qpair failed and we were unable to recover it. 00:29:16.868 [2024-07-15 20:27:42.080237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.868 [2024-07-15 20:27:42.080372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.868 [2024-07-15 20:27:42.080394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.868 [2024-07-15 20:27:42.080403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.868 [2024-07-15 20:27:42.080413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.868 [2024-07-15 20:27:42.080433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.868 qpair failed and we were unable to recover it. 
00:29:16.868 [2024-07-15 20:27:42.090122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.868 [2024-07-15 20:27:42.090231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.868 [2024-07-15 20:27:42.090252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.868 [2024-07-15 20:27:42.090268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.868 [2024-07-15 20:27:42.090278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.868 [2024-07-15 20:27:42.090298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.868 qpair failed and we were unable to recover it. 00:29:16.868 [2024-07-15 20:27:42.100192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.868 [2024-07-15 20:27:42.100295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.868 [2024-07-15 20:27:42.100316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.868 [2024-07-15 20:27:42.100326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.868 [2024-07-15 20:27:42.100335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.868 [2024-07-15 20:27:42.100360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.868 qpair failed and we were unable to recover it. 00:29:16.868 [2024-07-15 20:27:42.110121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.868 [2024-07-15 20:27:42.110213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.868 [2024-07-15 20:27:42.110234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.868 [2024-07-15 20:27:42.110244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.868 [2024-07-15 20:27:42.110259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.868 [2024-07-15 20:27:42.110280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.868 qpair failed and we were unable to recover it. 
00:29:16.868 [2024-07-15 20:27:42.120372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.868 [2024-07-15 20:27:42.120495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.868 [2024-07-15 20:27:42.120517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.868 [2024-07-15 20:27:42.120527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.868 [2024-07-15 20:27:42.120537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.868 [2024-07-15 20:27:42.120557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.868 qpair failed and we were unable to recover it. 00:29:16.868 [2024-07-15 20:27:42.130198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.868 [2024-07-15 20:27:42.130321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.868 [2024-07-15 20:27:42.130342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.868 [2024-07-15 20:27:42.130352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.868 [2024-07-15 20:27:42.130362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.868 [2024-07-15 20:27:42.130383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.868 qpair failed and we were unable to recover it. 00:29:16.868 [2024-07-15 20:27:42.140298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.868 [2024-07-15 20:27:42.140389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.869 [2024-07-15 20:27:42.140410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.869 [2024-07-15 20:27:42.140419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.869 [2024-07-15 20:27:42.140428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.869 [2024-07-15 20:27:42.140447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.869 qpair failed and we were unable to recover it. 
00:29:16.869 [2024-07-15 20:27:42.150363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.869 [2024-07-15 20:27:42.150455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.869 [2024-07-15 20:27:42.150480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.869 [2024-07-15 20:27:42.150490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.869 [2024-07-15 20:27:42.150500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.869 [2024-07-15 20:27:42.150520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.869 qpair failed and we were unable to recover it. 00:29:16.869 [2024-07-15 20:27:42.160542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.869 [2024-07-15 20:27:42.160712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.869 [2024-07-15 20:27:42.160733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.869 [2024-07-15 20:27:42.160743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.869 [2024-07-15 20:27:42.160752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.869 [2024-07-15 20:27:42.160772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.869 qpair failed and we were unable to recover it. 00:29:16.869 [2024-07-15 20:27:42.170425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.869 [2024-07-15 20:27:42.170523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.869 [2024-07-15 20:27:42.170543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.869 [2024-07-15 20:27:42.170553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.869 [2024-07-15 20:27:42.170562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.869 [2024-07-15 20:27:42.170581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.869 qpair failed and we were unable to recover it. 
00:29:16.869 [2024-07-15 20:27:42.180416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.869 [2024-07-15 20:27:42.180512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.869 [2024-07-15 20:27:42.180533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.869 [2024-07-15 20:27:42.180543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.869 [2024-07-15 20:27:42.180554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.869 [2024-07-15 20:27:42.180575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.869 qpair failed and we were unable to recover it. 00:29:16.869 [2024-07-15 20:27:42.190465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.869 [2024-07-15 20:27:42.190561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.869 [2024-07-15 20:27:42.190582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.869 [2024-07-15 20:27:42.190591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.869 [2024-07-15 20:27:42.190601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.869 [2024-07-15 20:27:42.190625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.869 qpair failed and we were unable to recover it. 00:29:16.869 [2024-07-15 20:27:42.200642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.869 [2024-07-15 20:27:42.200790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.869 [2024-07-15 20:27:42.200810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.869 [2024-07-15 20:27:42.200821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.869 [2024-07-15 20:27:42.200830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.869 [2024-07-15 20:27:42.200851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.869 qpair failed and we were unable to recover it. 
00:29:16.869 [2024-07-15 20:27:42.210498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.869 [2024-07-15 20:27:42.210596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.869 [2024-07-15 20:27:42.210618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.869 [2024-07-15 20:27:42.210628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.869 [2024-07-15 20:27:42.210638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:16.869 [2024-07-15 20:27:42.210659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.869 qpair failed and we were unable to recover it. 00:29:17.129 [2024-07-15 20:27:42.220545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.129 [2024-07-15 20:27:42.220657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.129 [2024-07-15 20:27:42.220679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.129 [2024-07-15 20:27:42.220689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.129 [2024-07-15 20:27:42.220700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.129 [2024-07-15 20:27:42.220721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.129 qpair failed and we were unable to recover it. 00:29:17.129 [2024-07-15 20:27:42.230554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.129 [2024-07-15 20:27:42.230650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.129 [2024-07-15 20:27:42.230671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.129 [2024-07-15 20:27:42.230681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.129 [2024-07-15 20:27:42.230692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.129 [2024-07-15 20:27:42.230713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.129 qpair failed and we were unable to recover it. 
00:29:17.129 [2024-07-15 20:27:42.240819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.129 [2024-07-15 20:27:42.240945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.129 [2024-07-15 20:27:42.240970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.129 [2024-07-15 20:27:42.240980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.129 [2024-07-15 20:27:42.240990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.129 [2024-07-15 20:27:42.241010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.129 qpair failed and we were unable to recover it. 00:29:17.129 [2024-07-15 20:27:42.250662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.129 [2024-07-15 20:27:42.250798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.129 [2024-07-15 20:27:42.250819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.129 [2024-07-15 20:27:42.250829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.129 [2024-07-15 20:27:42.250839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.129 [2024-07-15 20:27:42.250859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.129 qpair failed and we were unable to recover it. 00:29:17.129 [2024-07-15 20:27:42.260717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.129 [2024-07-15 20:27:42.260854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.129 [2024-07-15 20:27:42.260875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.130 [2024-07-15 20:27:42.260885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.130 [2024-07-15 20:27:42.260894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.130 [2024-07-15 20:27:42.260915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.130 qpair failed and we were unable to recover it. 
00:29:17.130 [2024-07-15 20:27:42.270753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.130 [2024-07-15 20:27:42.270855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.130 [2024-07-15 20:27:42.270876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.130 [2024-07-15 20:27:42.270887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.130 [2024-07-15 20:27:42.270896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.130 [2024-07-15 20:27:42.270916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.130 qpair failed and we were unable to recover it. 00:29:17.130 [2024-07-15 20:27:42.280966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.130 [2024-07-15 20:27:42.281086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.130 [2024-07-15 20:27:42.281107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.130 [2024-07-15 20:27:42.281118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.130 [2024-07-15 20:27:42.281131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.130 [2024-07-15 20:27:42.281151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.130 qpair failed and we were unable to recover it. 00:29:17.130 [2024-07-15 20:27:42.290844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.130 [2024-07-15 20:27:42.290982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.130 [2024-07-15 20:27:42.291003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.130 [2024-07-15 20:27:42.291013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.130 [2024-07-15 20:27:42.291022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.130 [2024-07-15 20:27:42.291042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.130 qpair failed and we were unable to recover it. 
00:29:17.130 [2024-07-15 20:27:42.300876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.130 [2024-07-15 20:27:42.301013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.130 [2024-07-15 20:27:42.301034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.130 [2024-07-15 20:27:42.301044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.130 [2024-07-15 20:27:42.301053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.130 [2024-07-15 20:27:42.301075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.130 qpair failed and we were unable to recover it. 00:29:17.130 [2024-07-15 20:27:42.310856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.130 [2024-07-15 20:27:42.310985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.130 [2024-07-15 20:27:42.311006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.130 [2024-07-15 20:27:42.311016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.130 [2024-07-15 20:27:42.311026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.130 [2024-07-15 20:27:42.311046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.130 qpair failed and we were unable to recover it. 00:29:17.130 [2024-07-15 20:27:42.321141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.130 [2024-07-15 20:27:42.321271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.130 [2024-07-15 20:27:42.321293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.130 [2024-07-15 20:27:42.321303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.130 [2024-07-15 20:27:42.321311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.130 [2024-07-15 20:27:42.321333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.130 qpair failed and we were unable to recover it. 
00:29:17.130 [2024-07-15 20:27:42.330920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.130 [2024-07-15 20:27:42.331042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.130 [2024-07-15 20:27:42.331064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.130 [2024-07-15 20:27:42.331074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.130 [2024-07-15 20:27:42.331084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.130 [2024-07-15 20:27:42.331104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.130 qpair failed and we were unable to recover it. 00:29:17.130 [2024-07-15 20:27:42.340937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.130 [2024-07-15 20:27:42.341056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.130 [2024-07-15 20:27:42.341078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.130 [2024-07-15 20:27:42.341087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.130 [2024-07-15 20:27:42.341097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.130 [2024-07-15 20:27:42.341118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.130 qpair failed and we were unable to recover it. 00:29:17.130 [2024-07-15 20:27:42.350987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.130 [2024-07-15 20:27:42.351084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.130 [2024-07-15 20:27:42.351105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.130 [2024-07-15 20:27:42.351115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.130 [2024-07-15 20:27:42.351124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.130 [2024-07-15 20:27:42.351144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.130 qpair failed and we were unable to recover it. 
00:29:17.130 [2024-07-15 20:27:42.361272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.130 [2024-07-15 20:27:42.361392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.130 [2024-07-15 20:27:42.361413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.130 [2024-07-15 20:27:42.361423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.130 [2024-07-15 20:27:42.361432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.130 [2024-07-15 20:27:42.361453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.130 qpair failed and we were unable to recover it. 00:29:17.130 [2024-07-15 20:27:42.371062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.130 [2024-07-15 20:27:42.371158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.130 [2024-07-15 20:27:42.371179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.130 [2024-07-15 20:27:42.371189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.130 [2024-07-15 20:27:42.371203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.130 [2024-07-15 20:27:42.371222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.130 qpair failed and we were unable to recover it. 00:29:17.130 [2024-07-15 20:27:42.381015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.130 [2024-07-15 20:27:42.381112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.130 [2024-07-15 20:27:42.381134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.130 [2024-07-15 20:27:42.381144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.130 [2024-07-15 20:27:42.381152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.130 [2024-07-15 20:27:42.381172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.130 qpair failed and we were unable to recover it. 
00:29:17.130 [2024-07-15 20:27:42.391088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.130 [2024-07-15 20:27:42.391180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.130 [2024-07-15 20:27:42.391200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.130 [2024-07-15 20:27:42.391211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.130 [2024-07-15 20:27:42.391221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.130 [2024-07-15 20:27:42.391240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.130 qpair failed and we were unable to recover it. 00:29:17.130 [2024-07-15 20:27:42.401378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.130 [2024-07-15 20:27:42.401541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.130 [2024-07-15 20:27:42.401562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.130 [2024-07-15 20:27:42.401572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.130 [2024-07-15 20:27:42.401581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.130 [2024-07-15 20:27:42.401602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.130 qpair failed and we were unable to recover it. 00:29:17.131 [2024-07-15 20:27:42.411153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.131 [2024-07-15 20:27:42.411273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.131 [2024-07-15 20:27:42.411294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.131 [2024-07-15 20:27:42.411304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.131 [2024-07-15 20:27:42.411313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.131 [2024-07-15 20:27:42.411335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.131 qpair failed and we were unable to recover it. 
00:29:17.131 [2024-07-15 20:27:42.421193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.131 [2024-07-15 20:27:42.421328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.131 [2024-07-15 20:27:42.421349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.131 [2024-07-15 20:27:42.421358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.131 [2024-07-15 20:27:42.421367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.131 [2024-07-15 20:27:42.421386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.131 qpair failed and we were unable to recover it. 00:29:17.131 [2024-07-15 20:27:42.431161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.131 [2024-07-15 20:27:42.431299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.131 [2024-07-15 20:27:42.431320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.131 [2024-07-15 20:27:42.431330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.131 [2024-07-15 20:27:42.431339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.131 [2024-07-15 20:27:42.431360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.131 qpair failed and we were unable to recover it. 00:29:17.131 [2024-07-15 20:27:42.441454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.131 [2024-07-15 20:27:42.441574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.131 [2024-07-15 20:27:42.441595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.131 [2024-07-15 20:27:42.441605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.131 [2024-07-15 20:27:42.441615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.131 [2024-07-15 20:27:42.441634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.131 qpair failed and we were unable to recover it. 
00:29:17.131 [2024-07-15 20:27:42.451287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.131 [2024-07-15 20:27:42.451389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.131 [2024-07-15 20:27:42.451410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.131 [2024-07-15 20:27:42.451420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.131 [2024-07-15 20:27:42.451431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.131 [2024-07-15 20:27:42.451451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.131 qpair failed and we were unable to recover it. 00:29:17.131 [2024-07-15 20:27:42.461332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.131 [2024-07-15 20:27:42.461424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.131 [2024-07-15 20:27:42.461445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.131 [2024-07-15 20:27:42.461455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.131 [2024-07-15 20:27:42.461468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.131 [2024-07-15 20:27:42.461488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.131 qpair failed and we were unable to recover it. 00:29:17.131 [2024-07-15 20:27:42.471369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.131 [2024-07-15 20:27:42.471466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.131 [2024-07-15 20:27:42.471487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.131 [2024-07-15 20:27:42.471497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.131 [2024-07-15 20:27:42.471506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.131 [2024-07-15 20:27:42.471526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.131 qpair failed and we were unable to recover it. 
00:29:17.391 [2024-07-15 20:27:42.481608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.391 [2024-07-15 20:27:42.481731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.391 [2024-07-15 20:27:42.481752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.391 [2024-07-15 20:27:42.481762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.391 [2024-07-15 20:27:42.481772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.391 [2024-07-15 20:27:42.481792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.391 qpair failed and we were unable to recover it. 00:29:17.391 [2024-07-15 20:27:42.491426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.391 [2024-07-15 20:27:42.491524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.391 [2024-07-15 20:27:42.491546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.391 [2024-07-15 20:27:42.491555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.391 [2024-07-15 20:27:42.491565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.391 [2024-07-15 20:27:42.491585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.391 qpair failed and we were unable to recover it. 00:29:17.391 [2024-07-15 20:27:42.501447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.391 [2024-07-15 20:27:42.501545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.391 [2024-07-15 20:27:42.501566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.391 [2024-07-15 20:27:42.501576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.391 [2024-07-15 20:27:42.501585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.391 [2024-07-15 20:27:42.501604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.391 qpair failed and we were unable to recover it. 
00:29:17.391 [2024-07-15 20:27:42.511401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.391 [2024-07-15 20:27:42.511497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.391 [2024-07-15 20:27:42.511518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.391 [2024-07-15 20:27:42.511528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.391 [2024-07-15 20:27:42.511538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.391 [2024-07-15 20:27:42.511558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.391 qpair failed and we were unable to recover it. 00:29:17.391 [2024-07-15 20:27:42.521621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.391 [2024-07-15 20:27:42.521742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.391 [2024-07-15 20:27:42.521764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.391 [2024-07-15 20:27:42.521774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.391 [2024-07-15 20:27:42.521784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.392 [2024-07-15 20:27:42.521804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.392 qpair failed and we were unable to recover it. 00:29:17.392 [2024-07-15 20:27:42.531567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.392 [2024-07-15 20:27:42.531659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.392 [2024-07-15 20:27:42.531680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.392 [2024-07-15 20:27:42.531690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.392 [2024-07-15 20:27:42.531700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.392 [2024-07-15 20:27:42.531720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.392 qpair failed and we were unable to recover it. 
00:29:17.392 [2024-07-15 20:27:42.541560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.392 [2024-07-15 20:27:42.541660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.392 [2024-07-15 20:27:42.541682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.392 [2024-07-15 20:27:42.541692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.392 [2024-07-15 20:27:42.541700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.392 [2024-07-15 20:27:42.541721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.392 qpair failed and we were unable to recover it. 00:29:17.392 [2024-07-15 20:27:42.551631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.392 [2024-07-15 20:27:42.551719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.392 [2024-07-15 20:27:42.551740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.392 [2024-07-15 20:27:42.551755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.392 [2024-07-15 20:27:42.551765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.392 [2024-07-15 20:27:42.551785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.392 qpair failed and we were unable to recover it. 00:29:17.392 [2024-07-15 20:27:42.561787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.392 [2024-07-15 20:27:42.561909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.392 [2024-07-15 20:27:42.561930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.392 [2024-07-15 20:27:42.561941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.392 [2024-07-15 20:27:42.561951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.392 [2024-07-15 20:27:42.561971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.392 qpair failed and we were unable to recover it. 
00:29:17.392 [2024-07-15 20:27:42.571661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.392 [2024-07-15 20:27:42.571758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.392 [2024-07-15 20:27:42.571780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.392 [2024-07-15 20:27:42.571790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.392 [2024-07-15 20:27:42.571800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.392 [2024-07-15 20:27:42.571819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.392 qpair failed and we were unable to recover it. 00:29:17.392 [2024-07-15 20:27:42.581752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.392 [2024-07-15 20:27:42.581853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.392 [2024-07-15 20:27:42.581875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.392 [2024-07-15 20:27:42.581885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.392 [2024-07-15 20:27:42.581894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.392 [2024-07-15 20:27:42.581914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.392 qpair failed and we were unable to recover it. 00:29:17.392 [2024-07-15 20:27:42.591716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.392 [2024-07-15 20:27:42.591810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.392 [2024-07-15 20:27:42.591832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.392 [2024-07-15 20:27:42.591841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.392 [2024-07-15 20:27:42.591852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.392 [2024-07-15 20:27:42.591872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.392 qpair failed and we were unable to recover it. 
00:29:17.392 [2024-07-15 20:27:42.602017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.392 [2024-07-15 20:27:42.602185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.392 [2024-07-15 20:27:42.602205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.392 [2024-07-15 20:27:42.602216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.392 [2024-07-15 20:27:42.602226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.392 [2024-07-15 20:27:42.602246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.392 qpair failed and we were unable to recover it. 00:29:17.392 [2024-07-15 20:27:42.611820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.392 [2024-07-15 20:27:42.611926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.392 [2024-07-15 20:27:42.611947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.392 [2024-07-15 20:27:42.611956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.392 [2024-07-15 20:27:42.611966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.392 [2024-07-15 20:27:42.611986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.392 qpair failed and we were unable to recover it. 00:29:17.392 [2024-07-15 20:27:42.621898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.392 [2024-07-15 20:27:42.622040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.392 [2024-07-15 20:27:42.622062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.392 [2024-07-15 20:27:42.622071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.392 [2024-07-15 20:27:42.622081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.392 [2024-07-15 20:27:42.622101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.392 qpair failed and we were unable to recover it. 
00:29:17.392 [2024-07-15 20:27:42.631873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.392 [2024-07-15 20:27:42.631988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.392 [2024-07-15 20:27:42.632009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.392 [2024-07-15 20:27:42.632020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.392 [2024-07-15 20:27:42.632029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.392 [2024-07-15 20:27:42.632050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.392 qpair failed and we were unable to recover it. 00:29:17.392 [2024-07-15 20:27:42.642099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.392 [2024-07-15 20:27:42.642231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.392 [2024-07-15 20:27:42.642253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.392 [2024-07-15 20:27:42.642273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.392 [2024-07-15 20:27:42.642282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.392 [2024-07-15 20:27:42.642301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.392 qpair failed and we were unable to recover it. 00:29:17.392 [2024-07-15 20:27:42.651943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.392 [2024-07-15 20:27:42.652034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.392 [2024-07-15 20:27:42.652055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.392 [2024-07-15 20:27:42.652065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.392 [2024-07-15 20:27:42.652075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.392 [2024-07-15 20:27:42.652095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.392 qpair failed and we were unable to recover it. 
00:29:17.392 [2024-07-15 20:27:42.661987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.392 [2024-07-15 20:27:42.662085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.392 [2024-07-15 20:27:42.662106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.392 [2024-07-15 20:27:42.662116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.392 [2024-07-15 20:27:42.662125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.392 [2024-07-15 20:27:42.662145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.392 qpair failed and we were unable to recover it. 00:29:17.392 [2024-07-15 20:27:42.671930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.392 [2024-07-15 20:27:42.672057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.393 [2024-07-15 20:27:42.672079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.393 [2024-07-15 20:27:42.672089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.393 [2024-07-15 20:27:42.672098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.393 [2024-07-15 20:27:42.672119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.393 qpair failed and we were unable to recover it. 00:29:17.393 [2024-07-15 20:27:42.682220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.393 [2024-07-15 20:27:42.682390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.393 [2024-07-15 20:27:42.682411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.393 [2024-07-15 20:27:42.682422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.393 [2024-07-15 20:27:42.682430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.393 [2024-07-15 20:27:42.682451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.393 qpair failed and we were unable to recover it. 
00:29:17.393 [2024-07-15 20:27:42.692067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.393 [2024-07-15 20:27:42.692160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.393 [2024-07-15 20:27:42.692181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.393 [2024-07-15 20:27:42.692191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.393 [2024-07-15 20:27:42.692201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.393 [2024-07-15 20:27:42.692221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.393 qpair failed and we were unable to recover it. 00:29:17.393 [2024-07-15 20:27:42.702091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.393 [2024-07-15 20:27:42.702195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.393 [2024-07-15 20:27:42.702215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.393 [2024-07-15 20:27:42.702225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.393 [2024-07-15 20:27:42.702234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.393 [2024-07-15 20:27:42.702261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.393 qpair failed and we were unable to recover it. 00:29:17.393 [2024-07-15 20:27:42.712110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.393 [2024-07-15 20:27:42.712228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.393 [2024-07-15 20:27:42.712249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.393 [2024-07-15 20:27:42.712272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.393 [2024-07-15 20:27:42.712281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.393 [2024-07-15 20:27:42.712303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.393 qpair failed and we were unable to recover it. 
00:29:17.393 [2024-07-15 20:27:42.722375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.393 [2024-07-15 20:27:42.722523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.393 [2024-07-15 20:27:42.722545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.393 [2024-07-15 20:27:42.722556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.393 [2024-07-15 20:27:42.722565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.393 [2024-07-15 20:27:42.722585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.393 qpair failed and we were unable to recover it. 00:29:17.393 [2024-07-15 20:27:42.732249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.393 [2024-07-15 20:27:42.732395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.393 [2024-07-15 20:27:42.732416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.393 [2024-07-15 20:27:42.732429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.393 [2024-07-15 20:27:42.732439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.393 [2024-07-15 20:27:42.732460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.393 qpair failed and we were unable to recover it. 00:29:17.653 [2024-07-15 20:27:42.742227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.653 [2024-07-15 20:27:42.742364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.653 [2024-07-15 20:27:42.742386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.653 [2024-07-15 20:27:42.742396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.653 [2024-07-15 20:27:42.742405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.653 [2024-07-15 20:27:42.742424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.653 qpair failed and we were unable to recover it. 
00:29:17.653 [2024-07-15 20:27:42.752312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.653 [2024-07-15 20:27:42.752446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.653 [2024-07-15 20:27:42.752467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.653 [2024-07-15 20:27:42.752477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.653 [2024-07-15 20:27:42.752486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.653 [2024-07-15 20:27:42.752506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.653 qpair failed and we were unable to recover it. 00:29:17.653 [2024-07-15 20:27:42.762520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.653 [2024-07-15 20:27:42.762641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.653 [2024-07-15 20:27:42.762662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.653 [2024-07-15 20:27:42.762672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.653 [2024-07-15 20:27:42.762681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.653 [2024-07-15 20:27:42.762701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.653 qpair failed and we were unable to recover it. 00:29:17.653 [2024-07-15 20:27:42.772375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.653 [2024-07-15 20:27:42.772516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.653 [2024-07-15 20:27:42.772538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.653 [2024-07-15 20:27:42.772548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.653 [2024-07-15 20:27:42.772557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.653 [2024-07-15 20:27:42.772577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.653 qpair failed and we were unable to recover it. 
00:29:17.653 [2024-07-15 20:27:42.782383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.653 [2024-07-15 20:27:42.782477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.653 [2024-07-15 20:27:42.782499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.653 [2024-07-15 20:27:42.782509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.653 [2024-07-15 20:27:42.782518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.653 [2024-07-15 20:27:42.782538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.653 qpair failed and we were unable to recover it. 00:29:17.653 [2024-07-15 20:27:42.792403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.653 [2024-07-15 20:27:42.792521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.653 [2024-07-15 20:27:42.792542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.653 [2024-07-15 20:27:42.792552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.653 [2024-07-15 20:27:42.792562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.653 [2024-07-15 20:27:42.792581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.653 qpair failed and we were unable to recover it. 00:29:17.653 [2024-07-15 20:27:42.802636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.653 [2024-07-15 20:27:42.802755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.654 [2024-07-15 20:27:42.802775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.654 [2024-07-15 20:27:42.802786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.654 [2024-07-15 20:27:42.802795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.654 [2024-07-15 20:27:42.802815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.654 qpair failed and we were unable to recover it. 
00:29:17.654 [2024-07-15 20:27:42.812462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.654 [2024-07-15 20:27:42.812590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.654 [2024-07-15 20:27:42.812612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.654 [2024-07-15 20:27:42.812621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.654 [2024-07-15 20:27:42.812631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.654 [2024-07-15 20:27:42.812651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.654 qpair failed and we were unable to recover it. 00:29:17.654 [2024-07-15 20:27:42.822478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.654 [2024-07-15 20:27:42.822574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.654 [2024-07-15 20:27:42.822599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.654 [2024-07-15 20:27:42.822609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.654 [2024-07-15 20:27:42.822617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.654 [2024-07-15 20:27:42.822636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.654 qpair failed and we were unable to recover it. 00:29:17.654 [2024-07-15 20:27:42.832565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.654 [2024-07-15 20:27:42.832657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.654 [2024-07-15 20:27:42.832677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.654 [2024-07-15 20:27:42.832687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.654 [2024-07-15 20:27:42.832697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.654 [2024-07-15 20:27:42.832716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.654 qpair failed and we were unable to recover it. 
00:29:17.654 [2024-07-15 20:27:42.842764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.654 [2024-07-15 20:27:42.842929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.654 [2024-07-15 20:27:42.842950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.654 [2024-07-15 20:27:42.842961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.654 [2024-07-15 20:27:42.842971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.654 [2024-07-15 20:27:42.842991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.654 qpair failed and we were unable to recover it. 00:29:17.654 [2024-07-15 20:27:42.852600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.654 [2024-07-15 20:27:42.852712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.654 [2024-07-15 20:27:42.852733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.654 [2024-07-15 20:27:42.852743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.654 [2024-07-15 20:27:42.852753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.654 [2024-07-15 20:27:42.852772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.654 qpair failed and we were unable to recover it. 00:29:17.654 [2024-07-15 20:27:42.862633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.654 [2024-07-15 20:27:42.862762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.654 [2024-07-15 20:27:42.862784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.654 [2024-07-15 20:27:42.862793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.654 [2024-07-15 20:27:42.862802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.654 [2024-07-15 20:27:42.862823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.654 qpair failed and we were unable to recover it. 
00:29:17.654 [2024-07-15 20:27:42.872681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.654 [2024-07-15 20:27:42.872775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.654 [2024-07-15 20:27:42.872797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.654 [2024-07-15 20:27:42.872807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.654 [2024-07-15 20:27:42.872817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.654 [2024-07-15 20:27:42.872836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.654 qpair failed and we were unable to recover it. 00:29:17.654 [2024-07-15 20:27:42.882919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.654 [2024-07-15 20:27:42.883091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.654 [2024-07-15 20:27:42.883112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.654 [2024-07-15 20:27:42.883122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.654 [2024-07-15 20:27:42.883132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.654 [2024-07-15 20:27:42.883152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.654 qpair failed and we were unable to recover it. 00:29:17.654 [2024-07-15 20:27:42.892782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.654 [2024-07-15 20:27:42.892876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.654 [2024-07-15 20:27:42.892897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.654 [2024-07-15 20:27:42.892908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.654 [2024-07-15 20:27:42.892916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.654 [2024-07-15 20:27:42.892937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.654 qpair failed and we were unable to recover it. 
00:29:17.654 [2024-07-15 20:27:42.902753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.654 [2024-07-15 20:27:42.902846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.654 [2024-07-15 20:27:42.902867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.654 [2024-07-15 20:27:42.902877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.654 [2024-07-15 20:27:42.902886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.654 [2024-07-15 20:27:42.902907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.654 qpair failed and we were unable to recover it. 00:29:17.654 [2024-07-15 20:27:42.912777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.654 [2024-07-15 20:27:42.912863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.654 [2024-07-15 20:27:42.912888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.654 [2024-07-15 20:27:42.912898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.654 [2024-07-15 20:27:42.912907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.654 [2024-07-15 20:27:42.912927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.654 qpair failed and we were unable to recover it. 00:29:17.654 [2024-07-15 20:27:42.923048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.654 [2024-07-15 20:27:42.923190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.654 [2024-07-15 20:27:42.923211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.654 [2024-07-15 20:27:42.923221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.654 [2024-07-15 20:27:42.923231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.654 [2024-07-15 20:27:42.923252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.654 qpair failed and we were unable to recover it. 
00:29:17.654 [2024-07-15 20:27:42.932846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.654 [2024-07-15 20:27:42.932948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.654 [2024-07-15 20:27:42.932969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.654 [2024-07-15 20:27:42.932979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.654 [2024-07-15 20:27:42.932988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.654 [2024-07-15 20:27:42.933008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.654 qpair failed and we were unable to recover it. 00:29:17.654 [2024-07-15 20:27:42.942911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.654 [2024-07-15 20:27:42.943029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.654 [2024-07-15 20:27:42.943050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.654 [2024-07-15 20:27:42.943060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.654 [2024-07-15 20:27:42.943068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.655 [2024-07-15 20:27:42.943088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.655 qpair failed and we were unable to recover it. 00:29:17.655 [2024-07-15 20:27:42.952951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.655 [2024-07-15 20:27:42.953042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.655 [2024-07-15 20:27:42.953062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.655 [2024-07-15 20:27:42.953073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.655 [2024-07-15 20:27:42.953082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.655 [2024-07-15 20:27:42.953106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.655 qpair failed and we were unable to recover it. 
00:29:17.655 [2024-07-15 20:27:42.963178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.655 [2024-07-15 20:27:42.963307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.655 [2024-07-15 20:27:42.963331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.655 [2024-07-15 20:27:42.963342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.655 [2024-07-15 20:27:42.963351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.655 [2024-07-15 20:27:42.963373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.655 qpair failed and we were unable to recover it. 00:29:17.655 [2024-07-15 20:27:42.972997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.655 [2024-07-15 20:27:42.973094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.655 [2024-07-15 20:27:42.973115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.655 [2024-07-15 20:27:42.973125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.655 [2024-07-15 20:27:42.973135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.655 [2024-07-15 20:27:42.973155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.655 qpair failed and we were unable to recover it. 00:29:17.655 [2024-07-15 20:27:42.983082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.655 [2024-07-15 20:27:42.983213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.655 [2024-07-15 20:27:42.983235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.655 [2024-07-15 20:27:42.983245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.655 [2024-07-15 20:27:42.983260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.655 [2024-07-15 20:27:42.983280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.655 qpair failed and we were unable to recover it. 
00:29:17.655 [2024-07-15 20:27:42.993107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.655 [2024-07-15 20:27:42.993201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.655 [2024-07-15 20:27:42.993221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.655 [2024-07-15 20:27:42.993231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.655 [2024-07-15 20:27:42.993241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.655 [2024-07-15 20:27:42.993268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.655 qpair failed and we were unable to recover it. 00:29:17.914 [2024-07-15 20:27:43.003375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.914 [2024-07-15 20:27:43.003503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.914 [2024-07-15 20:27:43.003529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.914 [2024-07-15 20:27:43.003539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.914 [2024-07-15 20:27:43.003548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.914 [2024-07-15 20:27:43.003569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.914 qpair failed and we were unable to recover it. 00:29:17.914 [2024-07-15 20:27:43.013149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.914 [2024-07-15 20:27:43.013245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.914 [2024-07-15 20:27:43.013271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.914 [2024-07-15 20:27:43.013282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.914 [2024-07-15 20:27:43.013291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.914 [2024-07-15 20:27:43.013312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.914 qpair failed and we were unable to recover it. 
00:29:17.914 [2024-07-15 20:27:43.023223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.914 [2024-07-15 20:27:43.023321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.914 [2024-07-15 20:27:43.023343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.914 [2024-07-15 20:27:43.023352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.914 [2024-07-15 20:27:43.023363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.914 [2024-07-15 20:27:43.023383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.914 qpair failed and we were unable to recover it. 00:29:17.914 [2024-07-15 20:27:43.033280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.914 [2024-07-15 20:27:43.033405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.914 [2024-07-15 20:27:43.033425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.914 [2024-07-15 20:27:43.033435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.914 [2024-07-15 20:27:43.033444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.914 [2024-07-15 20:27:43.033465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.914 qpair failed and we were unable to recover it. 00:29:17.914 [2024-07-15 20:27:43.043384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.914 [2024-07-15 20:27:43.043505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.914 [2024-07-15 20:27:43.043526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.914 [2024-07-15 20:27:43.043536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.914 [2024-07-15 20:27:43.043546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.914 [2024-07-15 20:27:43.043571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.914 qpair failed and we were unable to recover it. 
00:29:17.914 [2024-07-15 20:27:43.053278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.914 [2024-07-15 20:27:43.053395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.914 [2024-07-15 20:27:43.053417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.915 [2024-07-15 20:27:43.053427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.915 [2024-07-15 20:27:43.053436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.915 [2024-07-15 20:27:43.053457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.915 qpair failed and we were unable to recover it. 00:29:17.915 [2024-07-15 20:27:43.063338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.915 [2024-07-15 20:27:43.063438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.915 [2024-07-15 20:27:43.063459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.915 [2024-07-15 20:27:43.063469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.915 [2024-07-15 20:27:43.063477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.915 [2024-07-15 20:27:43.063498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.915 qpair failed and we were unable to recover it. 00:29:17.915 [2024-07-15 20:27:43.073331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.915 [2024-07-15 20:27:43.073437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.915 [2024-07-15 20:27:43.073459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.915 [2024-07-15 20:27:43.073469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.915 [2024-07-15 20:27:43.073479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.915 [2024-07-15 20:27:43.073498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.915 qpair failed and we were unable to recover it. 
00:29:17.915 [2024-07-15 20:27:43.083571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.915 [2024-07-15 20:27:43.083700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.915 [2024-07-15 20:27:43.083721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.915 [2024-07-15 20:27:43.083731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.915 [2024-07-15 20:27:43.083741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.915 [2024-07-15 20:27:43.083761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.915 qpair failed and we were unable to recover it. 00:29:17.915 [2024-07-15 20:27:43.093414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.915 [2024-07-15 20:27:43.093512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.915 [2024-07-15 20:27:43.093536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.915 [2024-07-15 20:27:43.093547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.915 [2024-07-15 20:27:43.093556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.915 [2024-07-15 20:27:43.093576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.915 qpair failed and we were unable to recover it. 00:29:17.915 [2024-07-15 20:27:43.103425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.915 [2024-07-15 20:27:43.103564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.915 [2024-07-15 20:27:43.103586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.915 [2024-07-15 20:27:43.103596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.915 [2024-07-15 20:27:43.103606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.915 [2024-07-15 20:27:43.103625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.915 qpair failed and we were unable to recover it. 
00:29:17.915 [2024-07-15 20:27:43.113412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.915 [2024-07-15 20:27:43.113578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.915 [2024-07-15 20:27:43.113599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.915 [2024-07-15 20:27:43.113609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.915 [2024-07-15 20:27:43.113618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.915 [2024-07-15 20:27:43.113639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.915 qpair failed and we were unable to recover it. 00:29:17.915 [2024-07-15 20:27:43.123777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.915 [2024-07-15 20:27:43.123916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.915 [2024-07-15 20:27:43.123937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.915 [2024-07-15 20:27:43.123947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.915 [2024-07-15 20:27:43.123956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.915 [2024-07-15 20:27:43.123976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.915 qpair failed and we were unable to recover it. 00:29:17.915 [2024-07-15 20:27:43.133554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.915 [2024-07-15 20:27:43.133659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.915 [2024-07-15 20:27:43.133680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.915 [2024-07-15 20:27:43.133690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.915 [2024-07-15 20:27:43.133699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.915 [2024-07-15 20:27:43.133724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.915 qpair failed and we were unable to recover it. 
00:29:17.915 [2024-07-15 20:27:43.143591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.915 [2024-07-15 20:27:43.143681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.915 [2024-07-15 20:27:43.143702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.915 [2024-07-15 20:27:43.143712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.915 [2024-07-15 20:27:43.143721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.915 [2024-07-15 20:27:43.143741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.915 qpair failed and we were unable to recover it. 00:29:17.915 [2024-07-15 20:27:43.153677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.915 [2024-07-15 20:27:43.153812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.915 [2024-07-15 20:27:43.153832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.915 [2024-07-15 20:27:43.153842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.915 [2024-07-15 20:27:43.153851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.915 [2024-07-15 20:27:43.153872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.915 qpair failed and we were unable to recover it. 00:29:17.915 [2024-07-15 20:27:43.163869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.915 [2024-07-15 20:27:43.163992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.915 [2024-07-15 20:27:43.164012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.915 [2024-07-15 20:27:43.164023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.915 [2024-07-15 20:27:43.164032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.915 [2024-07-15 20:27:43.164052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.915 qpair failed and we were unable to recover it. 
00:29:17.915 [2024-07-15 20:27:43.173720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.915 [2024-07-15 20:27:43.173827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.915 [2024-07-15 20:27:43.173848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.915 [2024-07-15 20:27:43.173858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.915 [2024-07-15 20:27:43.173868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.915 [2024-07-15 20:27:43.173888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.915 qpair failed and we were unable to recover it. 00:29:17.915 [2024-07-15 20:27:43.183733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.915 [2024-07-15 20:27:43.183832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.915 [2024-07-15 20:27:43.183860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.915 [2024-07-15 20:27:43.183871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.915 [2024-07-15 20:27:43.183879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.915 [2024-07-15 20:27:43.183899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.915 qpair failed and we were unable to recover it. 00:29:17.915 [2024-07-15 20:27:43.193743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.915 [2024-07-15 20:27:43.193837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.915 [2024-07-15 20:27:43.193858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.915 [2024-07-15 20:27:43.193867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.915 [2024-07-15 20:27:43.193877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.915 [2024-07-15 20:27:43.193897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.915 qpair failed and we were unable to recover it. 
00:29:17.916 [2024-07-15 20:27:43.204036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.916 [2024-07-15 20:27:43.204200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.916 [2024-07-15 20:27:43.204221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.916 [2024-07-15 20:27:43.204231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.916 [2024-07-15 20:27:43.204240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.916 [2024-07-15 20:27:43.204266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.916 qpair failed and we were unable to recover it. 00:29:17.916 [2024-07-15 20:27:43.213876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.916 [2024-07-15 20:27:43.213969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.916 [2024-07-15 20:27:43.213990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.916 [2024-07-15 20:27:43.214000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.916 [2024-07-15 20:27:43.214009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.916 [2024-07-15 20:27:43.214029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.916 qpair failed and we were unable to recover it. 00:29:17.916 [2024-07-15 20:27:43.223864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.916 [2024-07-15 20:27:43.223978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.916 [2024-07-15 20:27:43.223999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.916 [2024-07-15 20:27:43.224010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.916 [2024-07-15 20:27:43.224023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.916 [2024-07-15 20:27:43.224043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.916 qpair failed and we were unable to recover it. 
00:29:17.916 [2024-07-15 20:27:43.233927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.916 [2024-07-15 20:27:43.234013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.916 [2024-07-15 20:27:43.234034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.916 [2024-07-15 20:27:43.234044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.916 [2024-07-15 20:27:43.234054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.916 [2024-07-15 20:27:43.234073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.916 qpair failed and we were unable to recover it. 00:29:17.916 [2024-07-15 20:27:43.244142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.916 [2024-07-15 20:27:43.244281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.916 [2024-07-15 20:27:43.244303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.916 [2024-07-15 20:27:43.244312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.916 [2024-07-15 20:27:43.244322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.916 [2024-07-15 20:27:43.244342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.916 qpair failed and we were unable to recover it. 00:29:17.916 [2024-07-15 20:27:43.253975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.916 [2024-07-15 20:27:43.254079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.916 [2024-07-15 20:27:43.254101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.916 [2024-07-15 20:27:43.254110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.916 [2024-07-15 20:27:43.254121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:17.916 [2024-07-15 20:27:43.254141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.916 qpair failed and we were unable to recover it. 
00:29:18.175 [2024-07-15 20:27:43.264017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.175 [2024-07-15 20:27:43.264140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.175 [2024-07-15 20:27:43.264161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.175 [2024-07-15 20:27:43.264172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.175 [2024-07-15 20:27:43.264182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.175 [2024-07-15 20:27:43.264202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.175 qpair failed and we were unable to recover it. 00:29:18.175 [2024-07-15 20:27:43.274051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.175 [2024-07-15 20:27:43.274148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.175 [2024-07-15 20:27:43.274171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.175 [2024-07-15 20:27:43.274181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.175 [2024-07-15 20:27:43.274191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.175 [2024-07-15 20:27:43.274211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.175 qpair failed and we were unable to recover it. 00:29:18.175 [2024-07-15 20:27:43.284248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.175 [2024-07-15 20:27:43.284384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.175 [2024-07-15 20:27:43.284405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.175 [2024-07-15 20:27:43.284416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.175 [2024-07-15 20:27:43.284426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.175 [2024-07-15 20:27:43.284446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.175 qpair failed and we were unable to recover it. 
00:29:18.175 [2024-07-15 20:27:43.294118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.175 [2024-07-15 20:27:43.294262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.175 [2024-07-15 20:27:43.294283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.175 [2024-07-15 20:27:43.294294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.175 [2024-07-15 20:27:43.294303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.175 [2024-07-15 20:27:43.294325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.175 qpair failed and we were unable to recover it. 00:29:18.175 [2024-07-15 20:27:43.304114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.175 [2024-07-15 20:27:43.304238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.175 [2024-07-15 20:27:43.304269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.175 [2024-07-15 20:27:43.304280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.175 [2024-07-15 20:27:43.304289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.175 [2024-07-15 20:27:43.304310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.175 qpair failed and we were unable to recover it. 00:29:18.175 [2024-07-15 20:27:43.314210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.175 [2024-07-15 20:27:43.314305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.175 [2024-07-15 20:27:43.314327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.175 [2024-07-15 20:27:43.314338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.175 [2024-07-15 20:27:43.314351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.175 [2024-07-15 20:27:43.314371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.175 qpair failed and we were unable to recover it. 
00:29:18.175 [2024-07-15 20:27:43.324341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.175 [2024-07-15 20:27:43.324465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.175 [2024-07-15 20:27:43.324486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.175 [2024-07-15 20:27:43.324496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.175 [2024-07-15 20:27:43.324507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.175 [2024-07-15 20:27:43.324527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.175 qpair failed and we were unable to recover it. 00:29:18.175 [2024-07-15 20:27:43.334223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.175 [2024-07-15 20:27:43.334365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.175 [2024-07-15 20:27:43.334386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.175 [2024-07-15 20:27:43.334397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.175 [2024-07-15 20:27:43.334406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.175 [2024-07-15 20:27:43.334427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.175 qpair failed and we were unable to recover it. 00:29:18.175 [2024-07-15 20:27:43.344205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.175 [2024-07-15 20:27:43.344311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.175 [2024-07-15 20:27:43.344333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.175 [2024-07-15 20:27:43.344343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.175 [2024-07-15 20:27:43.344351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.175 [2024-07-15 20:27:43.344371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.175 qpair failed and we were unable to recover it. 
00:29:18.175 [2024-07-15 20:27:43.354250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.175 [2024-07-15 20:27:43.354347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.175 [2024-07-15 20:27:43.354368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.175 [2024-07-15 20:27:43.354379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.175 [2024-07-15 20:27:43.354388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.175 [2024-07-15 20:27:43.354408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.175 qpair failed and we were unable to recover it. 00:29:18.175 [2024-07-15 20:27:43.364519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.175 [2024-07-15 20:27:43.364658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.175 [2024-07-15 20:27:43.364679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.175 [2024-07-15 20:27:43.364689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.175 [2024-07-15 20:27:43.364699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.175 [2024-07-15 20:27:43.364719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.175 qpair failed and we were unable to recover it. 00:29:18.175 [2024-07-15 20:27:43.374305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.175 [2024-07-15 20:27:43.374404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.175 [2024-07-15 20:27:43.374425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.175 [2024-07-15 20:27:43.374435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.175 [2024-07-15 20:27:43.374445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.175 [2024-07-15 20:27:43.374465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.175 qpair failed and we were unable to recover it. 
00:29:18.175 [2024-07-15 20:27:43.384412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.175 [2024-07-15 20:27:43.384509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.175 [2024-07-15 20:27:43.384530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.175 [2024-07-15 20:27:43.384540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.175 [2024-07-15 20:27:43.384549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.175 [2024-07-15 20:27:43.384568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.175 qpair failed and we were unable to recover it. 00:29:18.175 [2024-07-15 20:27:43.394453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.175 [2024-07-15 20:27:43.394580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.175 [2024-07-15 20:27:43.394601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.175 [2024-07-15 20:27:43.394611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.175 [2024-07-15 20:27:43.394620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.175 [2024-07-15 20:27:43.394641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.175 qpair failed and we were unable to recover it. 00:29:18.176 [2024-07-15 20:27:43.404733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.176 [2024-07-15 20:27:43.404876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.176 [2024-07-15 20:27:43.404897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.176 [2024-07-15 20:27:43.404907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.176 [2024-07-15 20:27:43.404922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.176 [2024-07-15 20:27:43.404942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.176 qpair failed and we were unable to recover it. 
00:29:18.176 [2024-07-15 20:27:43.414547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.176 [2024-07-15 20:27:43.414693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.176 [2024-07-15 20:27:43.414714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.176 [2024-07-15 20:27:43.414725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.176 [2024-07-15 20:27:43.414735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.176 [2024-07-15 20:27:43.414754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.176 qpair failed and we were unable to recover it. 00:29:18.176 [2024-07-15 20:27:43.424527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.176 [2024-07-15 20:27:43.424626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.176 [2024-07-15 20:27:43.424646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.176 [2024-07-15 20:27:43.424656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.176 [2024-07-15 20:27:43.424665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.176 [2024-07-15 20:27:43.424685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.176 qpair failed and we were unable to recover it. 00:29:18.176 [2024-07-15 20:27:43.434558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.176 [2024-07-15 20:27:43.434646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.176 [2024-07-15 20:27:43.434668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.176 [2024-07-15 20:27:43.434678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.176 [2024-07-15 20:27:43.434688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.176 [2024-07-15 20:27:43.434708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.176 qpair failed and we were unable to recover it. 
00:29:18.176 [2024-07-15 20:27:43.444743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.176 [2024-07-15 20:27:43.444906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.176 [2024-07-15 20:27:43.444927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.176 [2024-07-15 20:27:43.444937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.176 [2024-07-15 20:27:43.444946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.176 [2024-07-15 20:27:43.444966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.176 qpair failed and we were unable to recover it. 00:29:18.176 [2024-07-15 20:27:43.454604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.176 [2024-07-15 20:27:43.454703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.176 [2024-07-15 20:27:43.454724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.176 [2024-07-15 20:27:43.454733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.176 [2024-07-15 20:27:43.454743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.176 [2024-07-15 20:27:43.454763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.176 qpair failed and we were unable to recover it. 00:29:18.176 [2024-07-15 20:27:43.464621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.176 [2024-07-15 20:27:43.464716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.176 [2024-07-15 20:27:43.464737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.176 [2024-07-15 20:27:43.464747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.176 [2024-07-15 20:27:43.464756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.176 [2024-07-15 20:27:43.464777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.176 qpair failed and we were unable to recover it. 
00:29:18.176 [2024-07-15 20:27:43.474677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.176 [2024-07-15 20:27:43.474763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.176 [2024-07-15 20:27:43.474784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.176 [2024-07-15 20:27:43.474794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.176 [2024-07-15 20:27:43.474805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.176 [2024-07-15 20:27:43.474825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.176 qpair failed and we were unable to recover it. 00:29:18.176 [2024-07-15 20:27:43.484897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.176 [2024-07-15 20:27:43.485063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.176 [2024-07-15 20:27:43.485084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.176 [2024-07-15 20:27:43.485094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.176 [2024-07-15 20:27:43.485105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.176 [2024-07-15 20:27:43.485125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.176 qpair failed and we were unable to recover it. 00:29:18.176 [2024-07-15 20:27:43.494702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.176 [2024-07-15 20:27:43.494805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.176 [2024-07-15 20:27:43.494826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.176 [2024-07-15 20:27:43.494841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.176 [2024-07-15 20:27:43.494850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.176 [2024-07-15 20:27:43.494871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.176 qpair failed and we were unable to recover it. 
00:29:18.176 [2024-07-15 20:27:43.504828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.176 [2024-07-15 20:27:43.504931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.176 [2024-07-15 20:27:43.504952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.176 [2024-07-15 20:27:43.504962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.176 [2024-07-15 20:27:43.504971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.176 [2024-07-15 20:27:43.504990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.176 qpair failed and we were unable to recover it. 00:29:18.176 [2024-07-15 20:27:43.514868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.176 [2024-07-15 20:27:43.514966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.176 [2024-07-15 20:27:43.514987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.176 [2024-07-15 20:27:43.514997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.176 [2024-07-15 20:27:43.515007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.176 [2024-07-15 20:27:43.515027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.176 qpair failed and we were unable to recover it. 00:29:18.437 [2024-07-15 20:27:43.525019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.437 [2024-07-15 20:27:43.525167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.437 [2024-07-15 20:27:43.525189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.437 [2024-07-15 20:27:43.525199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.437 [2024-07-15 20:27:43.525209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.437 [2024-07-15 20:27:43.525229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.437 qpair failed and we were unable to recover it. 
00:29:18.437 [2024-07-15 20:27:43.534908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.437 [2024-07-15 20:27:43.535001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.437 [2024-07-15 20:27:43.535023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.437 [2024-07-15 20:27:43.535033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.437 [2024-07-15 20:27:43.535044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.437 [2024-07-15 20:27:43.535063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.437 qpair failed and we were unable to recover it. 00:29:18.437 [2024-07-15 20:27:43.544914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.437 [2024-07-15 20:27:43.545008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.437 [2024-07-15 20:27:43.545030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.437 [2024-07-15 20:27:43.545039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.437 [2024-07-15 20:27:43.545048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.437 [2024-07-15 20:27:43.545068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.437 qpair failed and we were unable to recover it. 00:29:18.437 [2024-07-15 20:27:43.554958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.437 [2024-07-15 20:27:43.555048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.437 [2024-07-15 20:27:43.555069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.437 [2024-07-15 20:27:43.555079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.437 [2024-07-15 20:27:43.555089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.437 [2024-07-15 20:27:43.555110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.437 qpair failed and we were unable to recover it. 
00:29:18.437 [2024-07-15 20:27:43.565141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.437 [2024-07-15 20:27:43.565287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.437 [2024-07-15 20:27:43.565308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.437 [2024-07-15 20:27:43.565318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.437 [2024-07-15 20:27:43.565328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.437 [2024-07-15 20:27:43.565349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.438 qpair failed and we were unable to recover it. 00:29:18.438 [2024-07-15 20:27:43.574949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.438 [2024-07-15 20:27:43.575051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.438 [2024-07-15 20:27:43.575072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.438 [2024-07-15 20:27:43.575082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.438 [2024-07-15 20:27:43.575091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.438 [2024-07-15 20:27:43.575112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.438 qpair failed and we were unable to recover it. 00:29:18.438 [2024-07-15 20:27:43.585114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.438 [2024-07-15 20:27:43.585224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.438 [2024-07-15 20:27:43.585246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.438 [2024-07-15 20:27:43.585265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.438 [2024-07-15 20:27:43.585274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.438 [2024-07-15 20:27:43.585294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.438 qpair failed and we were unable to recover it. 
00:29:18.438 [2024-07-15 20:27:43.595033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.438 [2024-07-15 20:27:43.595154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.438 [2024-07-15 20:27:43.595175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.438 [2024-07-15 20:27:43.595185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.438 [2024-07-15 20:27:43.595195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.438 [2024-07-15 20:27:43.595215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.438 qpair failed and we were unable to recover it. 00:29:18.438 [2024-07-15 20:27:43.605303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.438 [2024-07-15 20:27:43.605429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.438 [2024-07-15 20:27:43.605451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.438 [2024-07-15 20:27:43.605461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.438 [2024-07-15 20:27:43.605471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.438 [2024-07-15 20:27:43.605491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.438 qpair failed and we were unable to recover it. 00:29:18.438 [2024-07-15 20:27:43.615198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.438 [2024-07-15 20:27:43.615336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.438 [2024-07-15 20:27:43.615358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.438 [2024-07-15 20:27:43.615368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.438 [2024-07-15 20:27:43.615379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.438 [2024-07-15 20:27:43.615400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.438 qpair failed and we were unable to recover it. 
00:29:18.438 [2024-07-15 20:27:43.625175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.438 [2024-07-15 20:27:43.625307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.438 [2024-07-15 20:27:43.625330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.438 [2024-07-15 20:27:43.625340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.438 [2024-07-15 20:27:43.625349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.438 [2024-07-15 20:27:43.625370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.438 qpair failed and we were unable to recover it. 00:29:18.438 [2024-07-15 20:27:43.635187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.438 [2024-07-15 20:27:43.635308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.438 [2024-07-15 20:27:43.635330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.438 [2024-07-15 20:27:43.635340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.438 [2024-07-15 20:27:43.635349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.438 [2024-07-15 20:27:43.635371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.438 qpair failed and we were unable to recover it. 00:29:18.438 [2024-07-15 20:27:43.645448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.438 [2024-07-15 20:27:43.645571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.438 [2024-07-15 20:27:43.645592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.438 [2024-07-15 20:27:43.645602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.438 [2024-07-15 20:27:43.645612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.438 [2024-07-15 20:27:43.645632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.438 qpair failed and we were unable to recover it. 
00:29:18.438 [2024-07-15 20:27:43.655274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.438 [2024-07-15 20:27:43.655383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.438 [2024-07-15 20:27:43.655404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.438 [2024-07-15 20:27:43.655413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.438 [2024-07-15 20:27:43.655423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.438 [2024-07-15 20:27:43.655443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.438 qpair failed and we were unable to recover it. 00:29:18.438 [2024-07-15 20:27:43.665264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.438 [2024-07-15 20:27:43.665359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.438 [2024-07-15 20:27:43.665381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.438 [2024-07-15 20:27:43.665391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.438 [2024-07-15 20:27:43.665400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.438 [2024-07-15 20:27:43.665421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.438 qpair failed and we were unable to recover it. 00:29:18.438 [2024-07-15 20:27:43.675343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.438 [2024-07-15 20:27:43.675472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.438 [2024-07-15 20:27:43.675493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.438 [2024-07-15 20:27:43.675508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.438 [2024-07-15 20:27:43.675517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.438 [2024-07-15 20:27:43.675538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.438 qpair failed and we were unable to recover it. 
00:29:18.438 [2024-07-15 20:27:43.685559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.438 [2024-07-15 20:27:43.685688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.438 [2024-07-15 20:27:43.685710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.438 [2024-07-15 20:27:43.685720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.438 [2024-07-15 20:27:43.685730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.438 [2024-07-15 20:27:43.685749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.438 qpair failed and we were unable to recover it. 00:29:18.438 [2024-07-15 20:27:43.695432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.438 [2024-07-15 20:27:43.695529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.438 [2024-07-15 20:27:43.695549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.438 [2024-07-15 20:27:43.695559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.438 [2024-07-15 20:27:43.695568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.438 [2024-07-15 20:27:43.695588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.438 qpair failed and we were unable to recover it. 00:29:18.438 [2024-07-15 20:27:43.705478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.438 [2024-07-15 20:27:43.705590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.438 [2024-07-15 20:27:43.705611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.438 [2024-07-15 20:27:43.705621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.438 [2024-07-15 20:27:43.705630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.438 [2024-07-15 20:27:43.705649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.438 qpair failed and we were unable to recover it. 
00:29:18.438 [2024-07-15 20:27:43.715460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.438 [2024-07-15 20:27:43.715572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.439 [2024-07-15 20:27:43.715593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.439 [2024-07-15 20:27:43.715604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.439 [2024-07-15 20:27:43.715613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.439 [2024-07-15 20:27:43.715633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.439 qpair failed and we were unable to recover it. 00:29:18.439 [2024-07-15 20:27:43.725743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.439 [2024-07-15 20:27:43.725905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.439 [2024-07-15 20:27:43.725927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.439 [2024-07-15 20:27:43.725937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.439 [2024-07-15 20:27:43.725947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.439 [2024-07-15 20:27:43.725967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.439 qpair failed and we were unable to recover it. 00:29:18.439 [2024-07-15 20:27:43.735480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.439 [2024-07-15 20:27:43.735574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.439 [2024-07-15 20:27:43.735595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.439 [2024-07-15 20:27:43.735605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.439 [2024-07-15 20:27:43.735615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.439 [2024-07-15 20:27:43.735635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.439 qpair failed and we were unable to recover it. 
00:29:18.439 [2024-07-15 20:27:43.745579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.439 [2024-07-15 20:27:43.745683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.439 [2024-07-15 20:27:43.745704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.439 [2024-07-15 20:27:43.745715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.439 [2024-07-15 20:27:43.745724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.439 [2024-07-15 20:27:43.745745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.439 qpair failed and we were unable to recover it. 00:29:18.439 [2024-07-15 20:27:43.755625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.439 [2024-07-15 20:27:43.755763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.439 [2024-07-15 20:27:43.755785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.439 [2024-07-15 20:27:43.755795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.439 [2024-07-15 20:27:43.755805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.439 [2024-07-15 20:27:43.755826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.439 qpair failed and we were unable to recover it. 00:29:18.439 [2024-07-15 20:27:43.765871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.439 [2024-07-15 20:27:43.765990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.439 [2024-07-15 20:27:43.766015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.439 [2024-07-15 20:27:43.766025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.439 [2024-07-15 20:27:43.766035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.439 [2024-07-15 20:27:43.766055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.439 qpair failed and we were unable to recover it. 
00:29:18.439 [2024-07-15 20:27:43.775729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.439 [2024-07-15 20:27:43.775826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.439 [2024-07-15 20:27:43.775847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.439 [2024-07-15 20:27:43.775857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.439 [2024-07-15 20:27:43.775867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.439 [2024-07-15 20:27:43.775887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.439 qpair failed and we were unable to recover it. 00:29:18.698 [2024-07-15 20:27:43.785745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.698 [2024-07-15 20:27:43.785850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.698 [2024-07-15 20:27:43.785871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.698 [2024-07-15 20:27:43.785881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.698 [2024-07-15 20:27:43.785890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.698 [2024-07-15 20:27:43.785910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.698 qpair failed and we were unable to recover it. 00:29:18.698 [2024-07-15 20:27:43.795779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.698 [2024-07-15 20:27:43.795877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.698 [2024-07-15 20:27:43.795897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.698 [2024-07-15 20:27:43.795908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.698 [2024-07-15 20:27:43.795917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.699 [2024-07-15 20:27:43.795937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.699 qpair failed and we were unable to recover it. 
00:29:18.699 [2024-07-15 20:27:43.806011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.699 [2024-07-15 20:27:43.806129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.699 [2024-07-15 20:27:43.806150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.699 [2024-07-15 20:27:43.806160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.699 [2024-07-15 20:27:43.806171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.699 [2024-07-15 20:27:43.806191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.699 qpair failed and we were unable to recover it. 00:29:18.699 [2024-07-15 20:27:43.815792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.699 [2024-07-15 20:27:43.815894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.699 [2024-07-15 20:27:43.815915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.699 [2024-07-15 20:27:43.815925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.699 [2024-07-15 20:27:43.815935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.699 [2024-07-15 20:27:43.815956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.699 qpair failed and we were unable to recover it. 00:29:18.699 [2024-07-15 20:27:43.825903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.699 [2024-07-15 20:27:43.826003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.699 [2024-07-15 20:27:43.826024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.699 [2024-07-15 20:27:43.826034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.699 [2024-07-15 20:27:43.826042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.699 [2024-07-15 20:27:43.826063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.699 qpair failed and we were unable to recover it. 
00:29:18.699 [2024-07-15 20:27:43.835852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.699 [2024-07-15 20:27:43.835942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.699 [2024-07-15 20:27:43.835962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.699 [2024-07-15 20:27:43.835972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.699 [2024-07-15 20:27:43.835982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.699 [2024-07-15 20:27:43.836001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.699 qpair failed and we were unable to recover it. 00:29:18.699 [2024-07-15 20:27:43.846120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.699 [2024-07-15 20:27:43.846246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.699 [2024-07-15 20:27:43.846273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.699 [2024-07-15 20:27:43.846283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.699 [2024-07-15 20:27:43.846292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.699 [2024-07-15 20:27:43.846314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.699 qpair failed and we were unable to recover it. 00:29:18.699 [2024-07-15 20:27:43.855936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.699 [2024-07-15 20:27:43.856035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.699 [2024-07-15 20:27:43.856060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.699 [2024-07-15 20:27:43.856070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.699 [2024-07-15 20:27:43.856081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.699 [2024-07-15 20:27:43.856100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.699 qpair failed and we were unable to recover it. 
00:29:18.699 [2024-07-15 20:27:43.866000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.699 [2024-07-15 20:27:43.866122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.699 [2024-07-15 20:27:43.866144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.699 [2024-07-15 20:27:43.866154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.699 [2024-07-15 20:27:43.866163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.699 [2024-07-15 20:27:43.866183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.699 qpair failed and we were unable to recover it. 00:29:18.699 [2024-07-15 20:27:43.876000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.699 [2024-07-15 20:27:43.876111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.699 [2024-07-15 20:27:43.876133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.699 [2024-07-15 20:27:43.876142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.699 [2024-07-15 20:27:43.876153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.699 [2024-07-15 20:27:43.876173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.699 qpair failed and we were unable to recover it. 00:29:18.699 [2024-07-15 20:27:43.886211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.699 [2024-07-15 20:27:43.886345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.699 [2024-07-15 20:27:43.886366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.699 [2024-07-15 20:27:43.886377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.699 [2024-07-15 20:27:43.886385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.699 [2024-07-15 20:27:43.886405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.699 qpair failed and we were unable to recover it. 
00:29:18.699 [2024-07-15 20:27:43.896056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.699 [2024-07-15 20:27:43.896202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.699 [2024-07-15 20:27:43.896222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.699 [2024-07-15 20:27:43.896232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.699 [2024-07-15 20:27:43.896242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.699 [2024-07-15 20:27:43.896271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.699 qpair failed and we were unable to recover it. 00:29:18.699 [2024-07-15 20:27:43.906124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.699 [2024-07-15 20:27:43.906263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.699 [2024-07-15 20:27:43.906284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.699 [2024-07-15 20:27:43.906294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.699 [2024-07-15 20:27:43.906302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.699 [2024-07-15 20:27:43.906323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.699 qpair failed and we were unable to recover it. 00:29:18.699 [2024-07-15 20:27:43.916185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.699 [2024-07-15 20:27:43.916317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.699 [2024-07-15 20:27:43.916338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.699 [2024-07-15 20:27:43.916348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.699 [2024-07-15 20:27:43.916359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.699 [2024-07-15 20:27:43.916379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.699 qpair failed and we were unable to recover it. 
00:29:18.699 [2024-07-15 20:27:43.926380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.699 [2024-07-15 20:27:43.926501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.699 [2024-07-15 20:27:43.926522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.699 [2024-07-15 20:27:43.926531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.699 [2024-07-15 20:27:43.926540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.699 [2024-07-15 20:27:43.926560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.699 qpair failed and we were unable to recover it. 00:29:18.699 [2024-07-15 20:27:43.936233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.699 [2024-07-15 20:27:43.936381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.699 [2024-07-15 20:27:43.936402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.700 [2024-07-15 20:27:43.936412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.700 [2024-07-15 20:27:43.936423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.700 [2024-07-15 20:27:43.936443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.700 qpair failed and we were unable to recover it. 00:29:18.700 [2024-07-15 20:27:43.946286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.700 [2024-07-15 20:27:43.946387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.700 [2024-07-15 20:27:43.946413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.700 [2024-07-15 20:27:43.946423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.700 [2024-07-15 20:27:43.946432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.700 [2024-07-15 20:27:43.946453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.700 qpair failed and we were unable to recover it. 
00:29:18.700 [2024-07-15 20:27:43.956292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.700 [2024-07-15 20:27:43.956390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.700 [2024-07-15 20:27:43.956412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.700 [2024-07-15 20:27:43.956421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.700 [2024-07-15 20:27:43.956431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.700 [2024-07-15 20:27:43.956452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.700 qpair failed and we were unable to recover it. 00:29:18.700 [2024-07-15 20:27:43.966448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.700 [2024-07-15 20:27:43.966595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.700 [2024-07-15 20:27:43.966618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.700 [2024-07-15 20:27:43.966628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.700 [2024-07-15 20:27:43.966638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.700 [2024-07-15 20:27:43.966659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.700 qpair failed and we were unable to recover it. 00:29:18.700 [2024-07-15 20:27:43.976362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.700 [2024-07-15 20:27:43.976506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.700 [2024-07-15 20:27:43.976528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.700 [2024-07-15 20:27:43.976538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.700 [2024-07-15 20:27:43.976548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.700 [2024-07-15 20:27:43.976569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.700 qpair failed and we were unable to recover it. 
00:29:18.700 [2024-07-15 20:27:43.986354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.700 [2024-07-15 20:27:43.986450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.700 [2024-07-15 20:27:43.986471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.700 [2024-07-15 20:27:43.986481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.700 [2024-07-15 20:27:43.986490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.700 [2024-07-15 20:27:43.986518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.700 qpair failed and we were unable to recover it. 00:29:18.700 [2024-07-15 20:27:43.996407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.700 [2024-07-15 20:27:43.996492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.700 [2024-07-15 20:27:43.996513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.700 [2024-07-15 20:27:43.996523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.700 [2024-07-15 20:27:43.996533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.700 [2024-07-15 20:27:43.996553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.700 qpair failed and we were unable to recover it. 00:29:18.700 [2024-07-15 20:27:44.006622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.700 [2024-07-15 20:27:44.006769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.700 [2024-07-15 20:27:44.006789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.700 [2024-07-15 20:27:44.006799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.700 [2024-07-15 20:27:44.006809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.700 [2024-07-15 20:27:44.006829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.700 qpair failed and we were unable to recover it. 
00:29:18.700 [2024-07-15 20:27:44.016536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.700 [2024-07-15 20:27:44.016629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.700 [2024-07-15 20:27:44.016650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.700 [2024-07-15 20:27:44.016660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.700 [2024-07-15 20:27:44.016670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.700 [2024-07-15 20:27:44.016690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.700 qpair failed and we were unable to recover it. 00:29:18.700 [2024-07-15 20:27:44.026463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.700 [2024-07-15 20:27:44.026567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.700 [2024-07-15 20:27:44.026588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.700 [2024-07-15 20:27:44.026598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.700 [2024-07-15 20:27:44.026607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.700 [2024-07-15 20:27:44.026627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.700 qpair failed and we were unable to recover it. 00:29:18.700 [2024-07-15 20:27:44.036544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.700 [2024-07-15 20:27:44.036643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.700 [2024-07-15 20:27:44.036667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.700 [2024-07-15 20:27:44.036677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.700 [2024-07-15 20:27:44.036687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.700 [2024-07-15 20:27:44.036707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.700 qpair failed and we were unable to recover it. 
00:29:18.700 [2024-07-15 20:27:44.046758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.700 [2024-07-15 20:27:44.046877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.700 [2024-07-15 20:27:44.046898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.700 [2024-07-15 20:27:44.046908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.700 [2024-07-15 20:27:44.046918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.700 [2024-07-15 20:27:44.046937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.700 qpair failed and we were unable to recover it. 00:29:18.961 [2024-07-15 20:27:44.056591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.961 [2024-07-15 20:27:44.056694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.961 [2024-07-15 20:27:44.056714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.961 [2024-07-15 20:27:44.056724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.961 [2024-07-15 20:27:44.056735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.961 [2024-07-15 20:27:44.056754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.961 qpair failed and we were unable to recover it. 00:29:18.961 [2024-07-15 20:27:44.066644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.961 [2024-07-15 20:27:44.066747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.961 [2024-07-15 20:27:44.066768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.961 [2024-07-15 20:27:44.066778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.961 [2024-07-15 20:27:44.066789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.961 [2024-07-15 20:27:44.066808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.961 qpair failed and we were unable to recover it. 
00:29:18.961 [2024-07-15 20:27:44.076696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.961 [2024-07-15 20:27:44.076787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.961 [2024-07-15 20:27:44.076807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.961 [2024-07-15 20:27:44.076817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.961 [2024-07-15 20:27:44.076827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.961 [2024-07-15 20:27:44.076852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.961 qpair failed and we were unable to recover it. 00:29:18.961 [2024-07-15 20:27:44.086837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.961 [2024-07-15 20:27:44.086958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.961 [2024-07-15 20:27:44.086979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.961 [2024-07-15 20:27:44.086989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.961 [2024-07-15 20:27:44.086999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.961 [2024-07-15 20:27:44.087018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.961 qpair failed and we were unable to recover it. 00:29:18.961 [2024-07-15 20:27:44.096701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.961 [2024-07-15 20:27:44.096796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.961 [2024-07-15 20:27:44.096817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.961 [2024-07-15 20:27:44.096827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.961 [2024-07-15 20:27:44.096837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.961 [2024-07-15 20:27:44.096856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.961 qpair failed and we were unable to recover it. 
00:29:18.961 [2024-07-15 20:27:44.106751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.961 [2024-07-15 20:27:44.106860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.961 [2024-07-15 20:27:44.106882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.961 [2024-07-15 20:27:44.106891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.961 [2024-07-15 20:27:44.106900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.961 [2024-07-15 20:27:44.106923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.961 qpair failed and we were unable to recover it. 00:29:18.961 [2024-07-15 20:27:44.116789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.961 [2024-07-15 20:27:44.116903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.961 [2024-07-15 20:27:44.116924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.961 [2024-07-15 20:27:44.116935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.961 [2024-07-15 20:27:44.116944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.961 [2024-07-15 20:27:44.116964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.961 qpair failed and we were unable to recover it. 00:29:18.961 [2024-07-15 20:27:44.126990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.961 [2024-07-15 20:27:44.127135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.961 [2024-07-15 20:27:44.127160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.961 [2024-07-15 20:27:44.127170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.961 [2024-07-15 20:27:44.127180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.961 [2024-07-15 20:27:44.127199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.961 qpair failed and we were unable to recover it. 
00:29:18.961 [2024-07-15 20:27:44.136867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.961 [2024-07-15 20:27:44.136965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.961 [2024-07-15 20:27:44.136987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.961 [2024-07-15 20:27:44.136998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.961 [2024-07-15 20:27:44.137007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.961 [2024-07-15 20:27:44.137027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.961 qpair failed and we were unable to recover it. 00:29:18.961 [2024-07-15 20:27:44.146894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.961 [2024-07-15 20:27:44.147000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.961 [2024-07-15 20:27:44.147021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.961 [2024-07-15 20:27:44.147031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.961 [2024-07-15 20:27:44.147042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.961 [2024-07-15 20:27:44.147061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.961 qpair failed and we were unable to recover it. 00:29:18.961 [2024-07-15 20:27:44.156954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.961 [2024-07-15 20:27:44.157086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.961 [2024-07-15 20:27:44.157106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.961 [2024-07-15 20:27:44.157116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.961 [2024-07-15 20:27:44.157126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.961 [2024-07-15 20:27:44.157146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.961 qpair failed and we were unable to recover it. 
00:29:18.961 [2024-07-15 20:27:44.167212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.961 [2024-07-15 20:27:44.167370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.961 [2024-07-15 20:27:44.167392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.961 [2024-07-15 20:27:44.167402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.961 [2024-07-15 20:27:44.167415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.961 [2024-07-15 20:27:44.167436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.961 qpair failed and we were unable to recover it. 00:29:18.961 [2024-07-15 20:27:44.177048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.961 [2024-07-15 20:27:44.177182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.961 [2024-07-15 20:27:44.177203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.961 [2024-07-15 20:27:44.177213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.961 [2024-07-15 20:27:44.177223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.961 [2024-07-15 20:27:44.177243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.961 qpair failed and we were unable to recover it. 00:29:18.961 [2024-07-15 20:27:44.187088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.962 [2024-07-15 20:27:44.187224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.962 [2024-07-15 20:27:44.187245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.962 [2024-07-15 20:27:44.187261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.962 [2024-07-15 20:27:44.187272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.962 [2024-07-15 20:27:44.187292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.962 qpair failed and we were unable to recover it. 
00:29:18.962 [2024-07-15 20:27:44.197070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.962 [2024-07-15 20:27:44.197189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.962 [2024-07-15 20:27:44.197211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.962 [2024-07-15 20:27:44.197221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.962 [2024-07-15 20:27:44.197230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.962 [2024-07-15 20:27:44.197251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.962 qpair failed and we were unable to recover it. 00:29:18.962 [2024-07-15 20:27:44.207354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.962 [2024-07-15 20:27:44.207511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.962 [2024-07-15 20:27:44.207531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.962 [2024-07-15 20:27:44.207541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.962 [2024-07-15 20:27:44.207550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.962 [2024-07-15 20:27:44.207571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.962 qpair failed and we were unable to recover it. 00:29:18.962 [2024-07-15 20:27:44.217114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.962 [2024-07-15 20:27:44.217219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.962 [2024-07-15 20:27:44.217240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.962 [2024-07-15 20:27:44.217250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.962 [2024-07-15 20:27:44.217265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.962 [2024-07-15 20:27:44.217285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.962 qpair failed and we were unable to recover it. 
00:29:18.962 [2024-07-15 20:27:44.227148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.962 [2024-07-15 20:27:44.227248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.962 [2024-07-15 20:27:44.227274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.962 [2024-07-15 20:27:44.227284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.962 [2024-07-15 20:27:44.227293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.962 [2024-07-15 20:27:44.227314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.962 qpair failed and we were unable to recover it. 00:29:18.962 [2024-07-15 20:27:44.237171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.962 [2024-07-15 20:27:44.237315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.962 [2024-07-15 20:27:44.237335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.962 [2024-07-15 20:27:44.237346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.962 [2024-07-15 20:27:44.237355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.962 [2024-07-15 20:27:44.237375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.962 qpair failed and we were unable to recover it. 00:29:18.962 [2024-07-15 20:27:44.247410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.962 [2024-07-15 20:27:44.247559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.962 [2024-07-15 20:27:44.247580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.962 [2024-07-15 20:27:44.247590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.962 [2024-07-15 20:27:44.247599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.962 [2024-07-15 20:27:44.247619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.962 qpair failed and we were unable to recover it. 
00:29:18.962 [2024-07-15 20:27:44.257239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.962 [2024-07-15 20:27:44.257372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.962 [2024-07-15 20:27:44.257392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.962 [2024-07-15 20:27:44.257402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.962 [2024-07-15 20:27:44.257415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.962 [2024-07-15 20:27:44.257436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.962 qpair failed and we were unable to recover it. 00:29:18.962 [2024-07-15 20:27:44.267302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.962 [2024-07-15 20:27:44.267429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.962 [2024-07-15 20:27:44.267451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.962 [2024-07-15 20:27:44.267461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.962 [2024-07-15 20:27:44.267469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.962 [2024-07-15 20:27:44.267490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.962 qpair failed and we were unable to recover it. 00:29:18.962 [2024-07-15 20:27:44.277296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.962 [2024-07-15 20:27:44.277382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.962 [2024-07-15 20:27:44.277402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.962 [2024-07-15 20:27:44.277412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.962 [2024-07-15 20:27:44.277422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.962 [2024-07-15 20:27:44.277442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.962 qpair failed and we were unable to recover it. 
00:29:18.962 [2024-07-15 20:27:44.287584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.962 [2024-07-15 20:27:44.287709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.962 [2024-07-15 20:27:44.287730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.962 [2024-07-15 20:27:44.287740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.962 [2024-07-15 20:27:44.287750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.962 [2024-07-15 20:27:44.287770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.962 qpair failed and we were unable to recover it. 00:29:18.962 [2024-07-15 20:27:44.297388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.962 [2024-07-15 20:27:44.297505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.962 [2024-07-15 20:27:44.297526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.962 [2024-07-15 20:27:44.297536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.962 [2024-07-15 20:27:44.297545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.962 [2024-07-15 20:27:44.297566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.962 qpair failed and we were unable to recover it. 00:29:18.962 [2024-07-15 20:27:44.307435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.962 [2024-07-15 20:27:44.307543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.962 [2024-07-15 20:27:44.307564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.962 [2024-07-15 20:27:44.307574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.962 [2024-07-15 20:27:44.307583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:18.962 [2024-07-15 20:27:44.307604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.962 qpair failed and we were unable to recover it. 
00:29:19.223 [2024-07-15 20:27:44.317446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.223 [2024-07-15 20:27:44.317539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.223 [2024-07-15 20:27:44.317560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.223 [2024-07-15 20:27:44.317570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.223 [2024-07-15 20:27:44.317580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.223 [2024-07-15 20:27:44.317600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-07-15 20:27:44.327775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.223 [2024-07-15 20:27:44.327900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.223 [2024-07-15 20:27:44.327921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.223 [2024-07-15 20:27:44.327930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.223 [2024-07-15 20:27:44.327940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.223 [2024-07-15 20:27:44.327960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-07-15 20:27:44.337499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.223 [2024-07-15 20:27:44.337606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.223 [2024-07-15 20:27:44.337627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.223 [2024-07-15 20:27:44.337636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.223 [2024-07-15 20:27:44.337646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.223 [2024-07-15 20:27:44.337666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.223 qpair failed and we were unable to recover it. 
00:29:19.223 [2024-07-15 20:27:44.347540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.223 [2024-07-15 20:27:44.347658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.223 [2024-07-15 20:27:44.347679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.223 [2024-07-15 20:27:44.347689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.223 [2024-07-15 20:27:44.347702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.223 [2024-07-15 20:27:44.347722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-07-15 20:27:44.357537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.223 [2024-07-15 20:27:44.357640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.223 [2024-07-15 20:27:44.357661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.223 [2024-07-15 20:27:44.357671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.223 [2024-07-15 20:27:44.357681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.223 [2024-07-15 20:27:44.357701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-07-15 20:27:44.367853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.223 [2024-07-15 20:27:44.367979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.223 [2024-07-15 20:27:44.368000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.223 [2024-07-15 20:27:44.368009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.223 [2024-07-15 20:27:44.368018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.223 [2024-07-15 20:27:44.368038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.223 qpair failed and we were unable to recover it. 
00:29:19.223 [2024-07-15 20:27:44.377622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.223 [2024-07-15 20:27:44.377721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.223 [2024-07-15 20:27:44.377743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.223 [2024-07-15 20:27:44.377753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.223 [2024-07-15 20:27:44.377763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.223 [2024-07-15 20:27:44.377782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-07-15 20:27:44.387657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.223 [2024-07-15 20:27:44.387750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.223 [2024-07-15 20:27:44.387772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.223 [2024-07-15 20:27:44.387782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.223 [2024-07-15 20:27:44.387791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.223 [2024-07-15 20:27:44.387810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-07-15 20:27:44.397680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.223 [2024-07-15 20:27:44.397778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.223 [2024-07-15 20:27:44.397799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.223 [2024-07-15 20:27:44.397809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.223 [2024-07-15 20:27:44.397819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.223 [2024-07-15 20:27:44.397839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.223 qpair failed and we were unable to recover it. 
00:29:19.223 [2024-07-15 20:27:44.407909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.223 [2024-07-15 20:27:44.408024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.223 [2024-07-15 20:27:44.408045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.223 [2024-07-15 20:27:44.408055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.223 [2024-07-15 20:27:44.408065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.223 [2024-07-15 20:27:44.408084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.223 qpair failed and we were unable to recover it. 00:29:19.223 [2024-07-15 20:27:44.417757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.223 [2024-07-15 20:27:44.417860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.223 [2024-07-15 20:27:44.417881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.224 [2024-07-15 20:27:44.417891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.224 [2024-07-15 20:27:44.417901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.224 [2024-07-15 20:27:44.417921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-07-15 20:27:44.427812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.224 [2024-07-15 20:27:44.427904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.224 [2024-07-15 20:27:44.427924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.224 [2024-07-15 20:27:44.427934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.224 [2024-07-15 20:27:44.427942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.224 [2024-07-15 20:27:44.427961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.224 qpair failed and we were unable to recover it. 
00:29:19.224 [2024-07-15 20:27:44.437792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.224 [2024-07-15 20:27:44.437893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.224 [2024-07-15 20:27:44.437914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.224 [2024-07-15 20:27:44.437928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.224 [2024-07-15 20:27:44.437937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.224 [2024-07-15 20:27:44.437958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-07-15 20:27:44.448041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.224 [2024-07-15 20:27:44.448158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.224 [2024-07-15 20:27:44.448179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.224 [2024-07-15 20:27:44.448189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.224 [2024-07-15 20:27:44.448199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.224 [2024-07-15 20:27:44.448219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-07-15 20:27:44.457915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.224 [2024-07-15 20:27:44.458062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.224 [2024-07-15 20:27:44.458083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.224 [2024-07-15 20:27:44.458093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.224 [2024-07-15 20:27:44.458101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.224 [2024-07-15 20:27:44.458122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.224 qpair failed and we were unable to recover it. 
00:29:19.224 [2024-07-15 20:27:44.467926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.224 [2024-07-15 20:27:44.468016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.224 [2024-07-15 20:27:44.468037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.224 [2024-07-15 20:27:44.468046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.224 [2024-07-15 20:27:44.468055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.224 [2024-07-15 20:27:44.468076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-07-15 20:27:44.478014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.224 [2024-07-15 20:27:44.478159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.224 [2024-07-15 20:27:44.478180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.224 [2024-07-15 20:27:44.478190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.224 [2024-07-15 20:27:44.478199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.224 [2024-07-15 20:27:44.478219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-07-15 20:27:44.488265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.224 [2024-07-15 20:27:44.488387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.224 [2024-07-15 20:27:44.488409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.224 [2024-07-15 20:27:44.488419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.224 [2024-07-15 20:27:44.488428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.224 [2024-07-15 20:27:44.488448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.224 qpair failed and we were unable to recover it. 
00:29:19.224 [2024-07-15 20:27:44.498057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.224 [2024-07-15 20:27:44.498189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.224 [2024-07-15 20:27:44.498210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.224 [2024-07-15 20:27:44.498220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.224 [2024-07-15 20:27:44.498230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.224 [2024-07-15 20:27:44.498249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-07-15 20:27:44.508108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.224 [2024-07-15 20:27:44.508200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.224 [2024-07-15 20:27:44.508220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.224 [2024-07-15 20:27:44.508231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.224 [2024-07-15 20:27:44.508239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.224 [2024-07-15 20:27:44.508265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-07-15 20:27:44.518087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.224 [2024-07-15 20:27:44.518203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.224 [2024-07-15 20:27:44.518224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.224 [2024-07-15 20:27:44.518234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.224 [2024-07-15 20:27:44.518244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.224 [2024-07-15 20:27:44.518269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.224 qpair failed and we were unable to recover it. 
00:29:19.224 [2024-07-15 20:27:44.528312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.224 [2024-07-15 20:27:44.528438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.224 [2024-07-15 20:27:44.528459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.224 [2024-07-15 20:27:44.528473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.224 [2024-07-15 20:27:44.528482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.224 [2024-07-15 20:27:44.528502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-07-15 20:27:44.538219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.224 [2024-07-15 20:27:44.538328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.224 [2024-07-15 20:27:44.538349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.224 [2024-07-15 20:27:44.538359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.224 [2024-07-15 20:27:44.538367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.224 [2024-07-15 20:27:44.538388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.224 qpair failed and we were unable to recover it. 00:29:19.224 [2024-07-15 20:27:44.548153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.224 [2024-07-15 20:27:44.548242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.224 [2024-07-15 20:27:44.548268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.224 [2024-07-15 20:27:44.548279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.224 [2024-07-15 20:27:44.548288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.224 [2024-07-15 20:27:44.548308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.224 qpair failed and we were unable to recover it. 
00:29:19.224 [2024-07-15 20:27:44.558171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.224 [2024-07-15 20:27:44.558276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.224 [2024-07-15 20:27:44.558298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.224 [2024-07-15 20:27:44.558308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.224 [2024-07-15 20:27:44.558318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.224 [2024-07-15 20:27:44.558338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.225 qpair failed and we were unable to recover it. 00:29:19.225 [2024-07-15 20:27:44.568450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.225 [2024-07-15 20:27:44.568572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.225 [2024-07-15 20:27:44.568593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.225 [2024-07-15 20:27:44.568602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.225 [2024-07-15 20:27:44.568613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.225 [2024-07-15 20:27:44.568632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.225 qpair failed and we were unable to recover it. 00:29:19.484 [2024-07-15 20:27:44.578279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.484 [2024-07-15 20:27:44.578380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.484 [2024-07-15 20:27:44.578400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.484 [2024-07-15 20:27:44.578410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.484 [2024-07-15 20:27:44.578419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.484 [2024-07-15 20:27:44.578439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.484 qpair failed and we were unable to recover it. 
00:29:19.485 [2024-07-15 20:27:44.588313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.485 [2024-07-15 20:27:44.588402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.485 [2024-07-15 20:27:44.588423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.485 [2024-07-15 20:27:44.588433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.485 [2024-07-15 20:27:44.588443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.485 [2024-07-15 20:27:44.588462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.485 qpair failed and we were unable to recover it. 00:29:19.485 [2024-07-15 20:27:44.598340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.485 [2024-07-15 20:27:44.598438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.485 [2024-07-15 20:27:44.598459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.485 [2024-07-15 20:27:44.598469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.485 [2024-07-15 20:27:44.598479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.485 [2024-07-15 20:27:44.598498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.485 qpair failed and we were unable to recover it. 00:29:19.485 [2024-07-15 20:27:44.608569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.485 [2024-07-15 20:27:44.608698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.485 [2024-07-15 20:27:44.608718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.485 [2024-07-15 20:27:44.608728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.485 [2024-07-15 20:27:44.608738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.485 [2024-07-15 20:27:44.608757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.485 qpair failed and we were unable to recover it. 
00:29:19.485 [2024-07-15 20:27:44.618487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.485 [2024-07-15 20:27:44.618603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.485 [2024-07-15 20:27:44.618624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.485 [2024-07-15 20:27:44.618638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.485 [2024-07-15 20:27:44.618647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.485 [2024-07-15 20:27:44.618667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.485 qpair failed and we were unable to recover it. 00:29:19.485 [2024-07-15 20:27:44.628435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.485 [2024-07-15 20:27:44.628528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.485 [2024-07-15 20:27:44.628550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.485 [2024-07-15 20:27:44.628560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.485 [2024-07-15 20:27:44.628569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.485 [2024-07-15 20:27:44.628588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.485 qpair failed and we were unable to recover it. 00:29:19.485 [2024-07-15 20:27:44.638509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.485 [2024-07-15 20:27:44.638614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.485 [2024-07-15 20:27:44.638635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.485 [2024-07-15 20:27:44.638645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.485 [2024-07-15 20:27:44.638655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.485 [2024-07-15 20:27:44.638675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.485 qpair failed and we were unable to recover it. 
00:29:19.485 [2024-07-15 20:27:44.648724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.485 [2024-07-15 20:27:44.648845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.485 [2024-07-15 20:27:44.648866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.485 [2024-07-15 20:27:44.648875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.485 [2024-07-15 20:27:44.648885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.485 [2024-07-15 20:27:44.648905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.485 qpair failed and we were unable to recover it. 00:29:19.485 [2024-07-15 20:27:44.658616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.485 [2024-07-15 20:27:44.658758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.485 [2024-07-15 20:27:44.658779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.485 [2024-07-15 20:27:44.658789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.485 [2024-07-15 20:27:44.658798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.485 [2024-07-15 20:27:44.658818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.485 qpair failed and we were unable to recover it. 00:29:19.485 [2024-07-15 20:27:44.668605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.485 [2024-07-15 20:27:44.668699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.485 [2024-07-15 20:27:44.668720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.485 [2024-07-15 20:27:44.668730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.485 [2024-07-15 20:27:44.668740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.485 [2024-07-15 20:27:44.668760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.485 qpair failed and we were unable to recover it. 
00:29:19.485 [2024-07-15 20:27:44.678562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.485 [2024-07-15 20:27:44.678686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.485 [2024-07-15 20:27:44.678707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.485 [2024-07-15 20:27:44.678717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.485 [2024-07-15 20:27:44.678728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.485 [2024-07-15 20:27:44.678748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.485 qpair failed and we were unable to recover it. 00:29:19.485 [2024-07-15 20:27:44.688844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.485 [2024-07-15 20:27:44.688979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.485 [2024-07-15 20:27:44.689001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.485 [2024-07-15 20:27:44.689011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.485 [2024-07-15 20:27:44.689021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.485 [2024-07-15 20:27:44.689041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.485 qpair failed and we were unable to recover it. 00:29:19.485 [2024-07-15 20:27:44.698696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.485 [2024-07-15 20:27:44.698795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.485 [2024-07-15 20:27:44.698816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.485 [2024-07-15 20:27:44.698826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.485 [2024-07-15 20:27:44.698835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.486 [2024-07-15 20:27:44.698856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.486 qpair failed and we were unable to recover it. 
00:29:19.486 [2024-07-15 20:27:44.708728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.486 [2024-07-15 20:27:44.708826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.486 [2024-07-15 20:27:44.708849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.486 [2024-07-15 20:27:44.708864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.486 [2024-07-15 20:27:44.708876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.486 [2024-07-15 20:27:44.708896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.486 qpair failed and we were unable to recover it. 00:29:19.486 [2024-07-15 20:27:44.718770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.486 [2024-07-15 20:27:44.718881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.486 [2024-07-15 20:27:44.718903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.486 [2024-07-15 20:27:44.718914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.486 [2024-07-15 20:27:44.718923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.486 [2024-07-15 20:27:44.718944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.486 qpair failed and we were unable to recover it. 00:29:19.486 [2024-07-15 20:27:44.729042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.486 [2024-07-15 20:27:44.729174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.486 [2024-07-15 20:27:44.729194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.486 [2024-07-15 20:27:44.729204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.486 [2024-07-15 20:27:44.729214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.486 [2024-07-15 20:27:44.729233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.486 qpair failed and we were unable to recover it. 
00:29:19.486 [2024-07-15 20:27:44.738843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.486 [2024-07-15 20:27:44.738960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.486 [2024-07-15 20:27:44.738981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.486 [2024-07-15 20:27:44.738990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.486 [2024-07-15 20:27:44.738999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.486 [2024-07-15 20:27:44.739020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.486 qpair failed and we were unable to recover it. 00:29:19.486 [2024-07-15 20:27:44.748810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.486 [2024-07-15 20:27:44.748926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.486 [2024-07-15 20:27:44.748948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.486 [2024-07-15 20:27:44.748958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.486 [2024-07-15 20:27:44.748968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.486 [2024-07-15 20:27:44.748987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.486 qpair failed and we were unable to recover it. 00:29:19.486 [2024-07-15 20:27:44.758917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.486 [2024-07-15 20:27:44.759008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.486 [2024-07-15 20:27:44.759029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.486 [2024-07-15 20:27:44.759039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.486 [2024-07-15 20:27:44.759049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.486 [2024-07-15 20:27:44.759070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.486 qpair failed and we were unable to recover it. 
00:29:19.486 [2024-07-15 20:27:44.769113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.486 [2024-07-15 20:27:44.769270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.486 [2024-07-15 20:27:44.769292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.486 [2024-07-15 20:27:44.769302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.486 [2024-07-15 20:27:44.769311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.486 [2024-07-15 20:27:44.769332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.486 qpair failed and we were unable to recover it. 00:29:19.486 [2024-07-15 20:27:44.778960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.486 [2024-07-15 20:27:44.779063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.486 [2024-07-15 20:27:44.779084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.486 [2024-07-15 20:27:44.779094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.486 [2024-07-15 20:27:44.779103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.486 [2024-07-15 20:27:44.779123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.486 qpair failed and we were unable to recover it. 00:29:19.486 [2024-07-15 20:27:44.788981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.486 [2024-07-15 20:27:44.789070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.486 [2024-07-15 20:27:44.789091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.486 [2024-07-15 20:27:44.789102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.486 [2024-07-15 20:27:44.789112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.486 [2024-07-15 20:27:44.789131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.486 qpair failed and we were unable to recover it. 
00:29:19.486 [2024-07-15 20:27:44.799013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.486 [2024-07-15 20:27:44.799128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.486 [2024-07-15 20:27:44.799157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.486 [2024-07-15 20:27:44.799168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.486 [2024-07-15 20:27:44.799177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.486 [2024-07-15 20:27:44.799197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.486 qpair failed and we were unable to recover it. 00:29:19.486 [2024-07-15 20:27:44.809222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.486 [2024-07-15 20:27:44.809373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.486 [2024-07-15 20:27:44.809395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.486 [2024-07-15 20:27:44.809405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.486 [2024-07-15 20:27:44.809414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.486 [2024-07-15 20:27:44.809435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.486 qpair failed and we were unable to recover it. 00:29:19.486 [2024-07-15 20:27:44.819047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.486 [2024-07-15 20:27:44.819148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.486 [2024-07-15 20:27:44.819170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.486 [2024-07-15 20:27:44.819180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.486 [2024-07-15 20:27:44.819189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.486 [2024-07-15 20:27:44.819210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.486 qpair failed and we were unable to recover it. 
00:29:19.486 [2024-07-15 20:27:44.829018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.486 [2024-07-15 20:27:44.829109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.486 [2024-07-15 20:27:44.829131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.486 [2024-07-15 20:27:44.829142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.486 [2024-07-15 20:27:44.829151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.486 [2024-07-15 20:27:44.829172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.486 qpair failed and we were unable to recover it. 00:29:19.746 [2024-07-15 20:27:44.839031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.746 [2024-07-15 20:27:44.839130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.746 [2024-07-15 20:27:44.839151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.746 [2024-07-15 20:27:44.839161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.746 [2024-07-15 20:27:44.839171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.746 [2024-07-15 20:27:44.839196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.746 qpair failed and we were unable to recover it. 00:29:19.746 [2024-07-15 20:27:44.849367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.746 [2024-07-15 20:27:44.849490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.746 [2024-07-15 20:27:44.849511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.746 [2024-07-15 20:27:44.849520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.746 [2024-07-15 20:27:44.849530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.746 [2024-07-15 20:27:44.849550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.746 qpair failed and we were unable to recover it. 
00:29:19.746 [2024-07-15 20:27:44.859129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.746 [2024-07-15 20:27:44.859275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.746 [2024-07-15 20:27:44.859295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.746 [2024-07-15 20:27:44.859304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.746 [2024-07-15 20:27:44.859313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.746 [2024-07-15 20:27:44.859334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.746 qpair failed and we were unable to recover it. 00:29:19.746 [2024-07-15 20:27:44.869152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.746 [2024-07-15 20:27:44.869245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.746 [2024-07-15 20:27:44.869271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.746 [2024-07-15 20:27:44.869281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.746 [2024-07-15 20:27:44.869291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.746 [2024-07-15 20:27:44.869311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.746 qpair failed and we were unable to recover it. 00:29:19.746 [2024-07-15 20:27:44.879178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.746 [2024-07-15 20:27:44.879273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.746 [2024-07-15 20:27:44.879294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.746 [2024-07-15 20:27:44.879304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.746 [2024-07-15 20:27:44.879314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.746 [2024-07-15 20:27:44.879334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.746 qpair failed and we were unable to recover it. 
00:29:19.746 [2024-07-15 20:27:44.889490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.746 [2024-07-15 20:27:44.889618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.746 [2024-07-15 20:27:44.889642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.746 [2024-07-15 20:27:44.889652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.746 [2024-07-15 20:27:44.889662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.746 [2024-07-15 20:27:44.889682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.746 qpair failed and we were unable to recover it. 00:29:19.746 [2024-07-15 20:27:44.899371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.746 [2024-07-15 20:27:44.899466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.746 [2024-07-15 20:27:44.899487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.746 [2024-07-15 20:27:44.899497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.746 [2024-07-15 20:27:44.899506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.746 [2024-07-15 20:27:44.899525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.746 qpair failed and we were unable to recover it. 00:29:19.746 [2024-07-15 20:27:44.909355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.746 [2024-07-15 20:27:44.909453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.746 [2024-07-15 20:27:44.909474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.746 [2024-07-15 20:27:44.909484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.746 [2024-07-15 20:27:44.909494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.746 [2024-07-15 20:27:44.909514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.746 qpair failed and we were unable to recover it. 
00:29:19.746 [2024-07-15 20:27:44.919350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.746 [2024-07-15 20:27:44.919443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.746 [2024-07-15 20:27:44.919464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.746 [2024-07-15 20:27:44.919474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.746 [2024-07-15 20:27:44.919484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.746 [2024-07-15 20:27:44.919504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.746 qpair failed and we were unable to recover it. 00:29:19.746 [2024-07-15 20:27:44.929591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.747 [2024-07-15 20:27:44.929712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.747 [2024-07-15 20:27:44.929733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.747 [2024-07-15 20:27:44.929743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.747 [2024-07-15 20:27:44.929754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.747 [2024-07-15 20:27:44.929779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.747 qpair failed and we were unable to recover it. 00:29:19.747 [2024-07-15 20:27:44.939450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.747 [2024-07-15 20:27:44.939599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.747 [2024-07-15 20:27:44.939620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.747 [2024-07-15 20:27:44.939629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.747 [2024-07-15 20:27:44.939639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.747 [2024-07-15 20:27:44.939658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.747 qpair failed and we were unable to recover it. 
00:29:19.747 [2024-07-15 20:27:44.949415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.747 [2024-07-15 20:27:44.949513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.747 [2024-07-15 20:27:44.949534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.747 [2024-07-15 20:27:44.949544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.747 [2024-07-15 20:27:44.949554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.747 [2024-07-15 20:27:44.949573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.747 qpair failed and we were unable to recover it. 00:29:19.747 [2024-07-15 20:27:44.959515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.747 [2024-07-15 20:27:44.959616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.747 [2024-07-15 20:27:44.959636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.747 [2024-07-15 20:27:44.959646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.747 [2024-07-15 20:27:44.959656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.747 [2024-07-15 20:27:44.959675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.747 qpair failed and we were unable to recover it. 00:29:19.747 [2024-07-15 20:27:44.969743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.747 [2024-07-15 20:27:44.969870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.747 [2024-07-15 20:27:44.969894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.747 [2024-07-15 20:27:44.969904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.747 [2024-07-15 20:27:44.969915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.747 [2024-07-15 20:27:44.969935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.747 qpair failed and we were unable to recover it. 
00:29:19.747 [2024-07-15 20:27:44.979529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.747 [2024-07-15 20:27:44.979629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.747 [2024-07-15 20:27:44.979655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.747 [2024-07-15 20:27:44.979665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.747 [2024-07-15 20:27:44.979674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.747 [2024-07-15 20:27:44.979695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.747 qpair failed and we were unable to recover it. 00:29:19.747 [2024-07-15 20:27:44.989640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.747 [2024-07-15 20:27:44.989739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.747 [2024-07-15 20:27:44.989760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.747 [2024-07-15 20:27:44.989770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.747 [2024-07-15 20:27:44.989780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.747 [2024-07-15 20:27:44.989799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.747 qpair failed and we were unable to recover it. 00:29:19.747 [2024-07-15 20:27:44.999652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.747 [2024-07-15 20:27:44.999773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.747 [2024-07-15 20:27:44.999793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.747 [2024-07-15 20:27:44.999804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.747 [2024-07-15 20:27:44.999813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.747 [2024-07-15 20:27:44.999835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.747 qpair failed and we were unable to recover it. 
00:29:19.747 [2024-07-15 20:27:45.009857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.747 [2024-07-15 20:27:45.009982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.747 [2024-07-15 20:27:45.010003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.747 [2024-07-15 20:27:45.010013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.747 [2024-07-15 20:27:45.010023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.747 [2024-07-15 20:27:45.010044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.747 qpair failed and we were unable to recover it. 00:29:19.747 [2024-07-15 20:27:45.019713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.747 [2024-07-15 20:27:45.019831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.747 [2024-07-15 20:27:45.019852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.747 [2024-07-15 20:27:45.019862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.747 [2024-07-15 20:27:45.019871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.747 [2024-07-15 20:27:45.019895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.747 qpair failed and we were unable to recover it. 00:29:19.747 [2024-07-15 20:27:45.029678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.747 [2024-07-15 20:27:45.029772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.747 [2024-07-15 20:27:45.029793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.747 [2024-07-15 20:27:45.029803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.747 [2024-07-15 20:27:45.029813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.747 [2024-07-15 20:27:45.029833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.747 qpair failed and we were unable to recover it. 
00:29:19.747 [2024-07-15 20:27:45.039783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.747 [2024-07-15 20:27:45.039882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.747 [2024-07-15 20:27:45.039903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.747 [2024-07-15 20:27:45.039913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.747 [2024-07-15 20:27:45.039923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.747 [2024-07-15 20:27:45.039944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.747 qpair failed and we were unable to recover it. 00:29:19.747 [2024-07-15 20:27:45.050006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.747 [2024-07-15 20:27:45.050127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.747 [2024-07-15 20:27:45.050148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.747 [2024-07-15 20:27:45.050158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.747 [2024-07-15 20:27:45.050169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.747 [2024-07-15 20:27:45.050189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.747 qpair failed and we were unable to recover it. 00:29:19.747 [2024-07-15 20:27:45.059781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.747 [2024-07-15 20:27:45.059888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.747 [2024-07-15 20:27:45.059909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.747 [2024-07-15 20:27:45.059919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.747 [2024-07-15 20:27:45.059928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.747 [2024-07-15 20:27:45.059949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.747 qpair failed and we were unable to recover it. 
00:29:19.747 [2024-07-15 20:27:45.069833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.747 [2024-07-15 20:27:45.069927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.747 [2024-07-15 20:27:45.069952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.748 [2024-07-15 20:27:45.069962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.748 [2024-07-15 20:27:45.069972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.748 [2024-07-15 20:27:45.069992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.748 qpair failed and we were unable to recover it. 00:29:19.748 [2024-07-15 20:27:45.079845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.748 [2024-07-15 20:27:45.079941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.748 [2024-07-15 20:27:45.079962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.748 [2024-07-15 20:27:45.079973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.748 [2024-07-15 20:27:45.079982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.748 [2024-07-15 20:27:45.080002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.748 qpair failed and we were unable to recover it. 00:29:19.748 [2024-07-15 20:27:45.090077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.748 [2024-07-15 20:27:45.090200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.748 [2024-07-15 20:27:45.090221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.748 [2024-07-15 20:27:45.090231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.748 [2024-07-15 20:27:45.090241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:19.748 [2024-07-15 20:27:45.090268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.748 qpair failed and we were unable to recover it. 
00:29:20.007 [2024-07-15 20:27:45.099873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.007 [2024-07-15 20:27:45.100008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.007 [2024-07-15 20:27:45.100029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.007 [2024-07-15 20:27:45.100039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.007 [2024-07-15 20:27:45.100049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.007 [2024-07-15 20:27:45.100069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.007 qpair failed and we were unable to recover it. 00:29:20.007 [2024-07-15 20:27:45.110023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.007 [2024-07-15 20:27:45.110117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.007 [2024-07-15 20:27:45.110138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.007 [2024-07-15 20:27:45.110148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.007 [2024-07-15 20:27:45.110162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.007 [2024-07-15 20:27:45.110182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.007 qpair failed and we were unable to recover it. 00:29:20.007 [2024-07-15 20:27:45.120005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.007 [2024-07-15 20:27:45.120095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.007 [2024-07-15 20:27:45.120116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.007 [2024-07-15 20:27:45.120126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.007 [2024-07-15 20:27:45.120136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.007 [2024-07-15 20:27:45.120156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.007 qpair failed and we were unable to recover it. 
00:29:20.007 [2024-07-15 20:27:45.130338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.007 [2024-07-15 20:27:45.130499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.007 [2024-07-15 20:27:45.130520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.007 [2024-07-15 20:27:45.130530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.007 [2024-07-15 20:27:45.130540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.007 [2024-07-15 20:27:45.130560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.007 qpair failed and we were unable to recover it. 00:29:20.007 [2024-07-15 20:27:45.140095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.007 [2024-07-15 20:27:45.140231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.007 [2024-07-15 20:27:45.140253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.007 [2024-07-15 20:27:45.140269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.007 [2024-07-15 20:27:45.140279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.007 [2024-07-15 20:27:45.140300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.007 qpair failed and we were unable to recover it. 00:29:20.007 [2024-07-15 20:27:45.150105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.007 [2024-07-15 20:27:45.150227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.007 [2024-07-15 20:27:45.150248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.007 [2024-07-15 20:27:45.150265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.007 [2024-07-15 20:27:45.150275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.007 [2024-07-15 20:27:45.150295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.007 qpair failed and we were unable to recover it. 
00:29:20.007 [2024-07-15 20:27:45.160147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.007 [2024-07-15 20:27:45.160268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.007 [2024-07-15 20:27:45.160290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.008 [2024-07-15 20:27:45.160301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.008 [2024-07-15 20:27:45.160310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.008 [2024-07-15 20:27:45.160330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.008 qpair failed and we were unable to recover it. 00:29:20.008 [2024-07-15 20:27:45.170390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.008 [2024-07-15 20:27:45.170513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.008 [2024-07-15 20:27:45.170535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.008 [2024-07-15 20:27:45.170545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.008 [2024-07-15 20:27:45.170555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.008 [2024-07-15 20:27:45.170575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.008 qpair failed and we were unable to recover it. 00:29:20.008 [2024-07-15 20:27:45.180235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.008 [2024-07-15 20:27:45.180343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.008 [2024-07-15 20:27:45.180365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.008 [2024-07-15 20:27:45.180375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.008 [2024-07-15 20:27:45.180383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.008 [2024-07-15 20:27:45.180405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.008 qpair failed and we were unable to recover it. 
00:29:20.008 [2024-07-15 20:27:45.190249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.008 [2024-07-15 20:27:45.190348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.008 [2024-07-15 20:27:45.190369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.008 [2024-07-15 20:27:45.190379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.008 [2024-07-15 20:27:45.190389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.008 [2024-07-15 20:27:45.190409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.008 qpair failed and we were unable to recover it. 00:29:20.008 [2024-07-15 20:27:45.200279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.008 [2024-07-15 20:27:45.200379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.008 [2024-07-15 20:27:45.200400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.008 [2024-07-15 20:27:45.200410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.008 [2024-07-15 20:27:45.200424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.008 [2024-07-15 20:27:45.200445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.008 qpair failed and we were unable to recover it. 00:29:20.008 [2024-07-15 20:27:45.210552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.008 [2024-07-15 20:27:45.210669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.008 [2024-07-15 20:27:45.210690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.008 [2024-07-15 20:27:45.210700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.008 [2024-07-15 20:27:45.210709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.008 [2024-07-15 20:27:45.210729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.008 qpair failed and we were unable to recover it. 
00:29:20.008 [2024-07-15 20:27:45.220408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.008 [2024-07-15 20:27:45.220514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.008 [2024-07-15 20:27:45.220536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.008 [2024-07-15 20:27:45.220546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.008 [2024-07-15 20:27:45.220555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.008 [2024-07-15 20:27:45.220577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.008 qpair failed and we were unable to recover it. 00:29:20.008 [2024-07-15 20:27:45.230362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.008 [2024-07-15 20:27:45.230452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.008 [2024-07-15 20:27:45.230473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.008 [2024-07-15 20:27:45.230483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.008 [2024-07-15 20:27:45.230493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.008 [2024-07-15 20:27:45.230513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.008 qpair failed and we were unable to recover it. 00:29:20.008 [2024-07-15 20:27:45.240420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.008 [2024-07-15 20:27:45.240514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.008 [2024-07-15 20:27:45.240535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.008 [2024-07-15 20:27:45.240545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.008 [2024-07-15 20:27:45.240555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.008 [2024-07-15 20:27:45.240575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.008 qpair failed and we were unable to recover it. 
00:29:20.008 [2024-07-15 20:27:45.250693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.008 [2024-07-15 20:27:45.250828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.008 [2024-07-15 20:27:45.250850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.008 [2024-07-15 20:27:45.250861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.008 [2024-07-15 20:27:45.250870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.008 [2024-07-15 20:27:45.250890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.008 qpair failed and we were unable to recover it. 00:29:20.008 [2024-07-15 20:27:45.260541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.008 [2024-07-15 20:27:45.260683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.008 [2024-07-15 20:27:45.260704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.008 [2024-07-15 20:27:45.260714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.008 [2024-07-15 20:27:45.260723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.008 [2024-07-15 20:27:45.260744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.008 qpair failed and we were unable to recover it. 00:29:20.008 [2024-07-15 20:27:45.270516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.008 [2024-07-15 20:27:45.270607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.008 [2024-07-15 20:27:45.270629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.008 [2024-07-15 20:27:45.270639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.008 [2024-07-15 20:27:45.270648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.008 [2024-07-15 20:27:45.270668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.008 qpair failed and we were unable to recover it. 
00:29:20.008 [2024-07-15 20:27:45.280580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.008 [2024-07-15 20:27:45.280702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.008 [2024-07-15 20:27:45.280723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.008 [2024-07-15 20:27:45.280732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.008 [2024-07-15 20:27:45.280743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.008 [2024-07-15 20:27:45.280763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.008 qpair failed and we were unable to recover it. 00:29:20.008 [2024-07-15 20:27:45.290814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.008 [2024-07-15 20:27:45.290961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.008 [2024-07-15 20:27:45.290982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.008 [2024-07-15 20:27:45.290992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.008 [2024-07-15 20:27:45.291006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.008 [2024-07-15 20:27:45.291026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.008 qpair failed and we were unable to recover it. 00:29:20.008 [2024-07-15 20:27:45.300605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.008 [2024-07-15 20:27:45.300700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.008 [2024-07-15 20:27:45.300722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.008 [2024-07-15 20:27:45.300732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.008 [2024-07-15 20:27:45.300741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.009 [2024-07-15 20:27:45.300762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.009 qpair failed and we were unable to recover it. 
00:29:20.009 [2024-07-15 20:27:45.310660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.009 [2024-07-15 20:27:45.310758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.009 [2024-07-15 20:27:45.310779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.009 [2024-07-15 20:27:45.310788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.009 [2024-07-15 20:27:45.310797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.009 [2024-07-15 20:27:45.310817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.009 qpair failed and we were unable to recover it. 00:29:20.009 [2024-07-15 20:27:45.320651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.009 [2024-07-15 20:27:45.320779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.009 [2024-07-15 20:27:45.320801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.009 [2024-07-15 20:27:45.320811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.009 [2024-07-15 20:27:45.320821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.009 [2024-07-15 20:27:45.320841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.009 qpair failed and we were unable to recover it. 00:29:20.009 [2024-07-15 20:27:45.330906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.009 [2024-07-15 20:27:45.331055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.009 [2024-07-15 20:27:45.331076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.009 [2024-07-15 20:27:45.331086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.009 [2024-07-15 20:27:45.331096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.009 [2024-07-15 20:27:45.331116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.009 qpair failed and we were unable to recover it. 
00:29:20.009 [2024-07-15 20:27:45.340682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.009 [2024-07-15 20:27:45.340793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.009 [2024-07-15 20:27:45.340814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.009 [2024-07-15 20:27:45.340824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.009 [2024-07-15 20:27:45.340833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.009 [2024-07-15 20:27:45.340853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.009 qpair failed and we were unable to recover it. 00:29:20.009 [2024-07-15 20:27:45.350795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.009 [2024-07-15 20:27:45.350905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.009 [2024-07-15 20:27:45.350926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.009 [2024-07-15 20:27:45.350937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.009 [2024-07-15 20:27:45.350947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.009 [2024-07-15 20:27:45.350967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.009 qpair failed and we were unable to recover it. 00:29:20.270 [2024-07-15 20:27:45.360804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.270 [2024-07-15 20:27:45.360901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.270 [2024-07-15 20:27:45.360923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.270 [2024-07-15 20:27:45.360933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.270 [2024-07-15 20:27:45.360943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.270 [2024-07-15 20:27:45.360963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.270 qpair failed and we were unable to recover it. 
00:29:20.270 [2024-07-15 20:27:45.371040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.270 [2024-07-15 20:27:45.371157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.270 [2024-07-15 20:27:45.371178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.270 [2024-07-15 20:27:45.371188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.270 [2024-07-15 20:27:45.371198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.270 [2024-07-15 20:27:45.371217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.270 qpair failed and we were unable to recover it. 00:29:20.270 [2024-07-15 20:27:45.380928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.270 [2024-07-15 20:27:45.381059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.270 [2024-07-15 20:27:45.381080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.270 [2024-07-15 20:27:45.381095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.270 [2024-07-15 20:27:45.381104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.270 [2024-07-15 20:27:45.381124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.270 qpair failed and we were unable to recover it. 00:29:20.270 [2024-07-15 20:27:45.390910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.270 [2024-07-15 20:27:45.391005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.270 [2024-07-15 20:27:45.391026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.270 [2024-07-15 20:27:45.391036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.270 [2024-07-15 20:27:45.391045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.270 [2024-07-15 20:27:45.391064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.270 qpair failed and we were unable to recover it. 
00:29:20.270 [2024-07-15 20:27:45.400995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.270 [2024-07-15 20:27:45.401093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.270 [2024-07-15 20:27:45.401114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.270 [2024-07-15 20:27:45.401124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.270 [2024-07-15 20:27:45.401134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.270 [2024-07-15 20:27:45.401153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.270 qpair failed and we were unable to recover it. 00:29:20.270 [2024-07-15 20:27:45.411182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.270 [2024-07-15 20:27:45.411309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.270 [2024-07-15 20:27:45.411331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.270 [2024-07-15 20:27:45.411341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.270 [2024-07-15 20:27:45.411351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.270 [2024-07-15 20:27:45.411370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.270 qpair failed and we were unable to recover it. 00:29:20.270 [2024-07-15 20:27:45.420998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.270 [2024-07-15 20:27:45.421130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.270 [2024-07-15 20:27:45.421151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.270 [2024-07-15 20:27:45.421161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.270 [2024-07-15 20:27:45.421170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.270 [2024-07-15 20:27:45.421190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.270 qpair failed and we were unable to recover it. 
00:29:20.270 [2024-07-15 20:27:45.431038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.270 [2024-07-15 20:27:45.431124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.270 [2024-07-15 20:27:45.431145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.270 [2024-07-15 20:27:45.431155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.270 [2024-07-15 20:27:45.431164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.270 [2024-07-15 20:27:45.431184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.270 qpair failed and we were unable to recover it. 00:29:20.270 [2024-07-15 20:27:45.441111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.270 [2024-07-15 20:27:45.441202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.270 [2024-07-15 20:27:45.441223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.270 [2024-07-15 20:27:45.441234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.270 [2024-07-15 20:27:45.441243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.270 [2024-07-15 20:27:45.441269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.270 qpair failed and we were unable to recover it. 00:29:20.270 [2024-07-15 20:27:45.451330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.270 [2024-07-15 20:27:45.451450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.270 [2024-07-15 20:27:45.451471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.270 [2024-07-15 20:27:45.451480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.270 [2024-07-15 20:27:45.451490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.270 [2024-07-15 20:27:45.451510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.270 qpair failed and we were unable to recover it. 
00:29:20.270 [2024-07-15 20:27:45.461161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.270 [2024-07-15 20:27:45.461263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.270 [2024-07-15 20:27:45.461284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.270 [2024-07-15 20:27:45.461293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.270 [2024-07-15 20:27:45.461303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.270 [2024-07-15 20:27:45.461323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.270 qpair failed and we were unable to recover it. 00:29:20.270 [2024-07-15 20:27:45.471203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.270 [2024-07-15 20:27:45.471300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.270 [2024-07-15 20:27:45.471321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.270 [2024-07-15 20:27:45.471336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.270 [2024-07-15 20:27:45.471348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.271 [2024-07-15 20:27:45.471367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.271 qpair failed and we were unable to recover it. 00:29:20.271 [2024-07-15 20:27:45.481218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.271 [2024-07-15 20:27:45.481327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.271 [2024-07-15 20:27:45.481349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.271 [2024-07-15 20:27:45.481359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.271 [2024-07-15 20:27:45.481368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.271 [2024-07-15 20:27:45.481390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.271 qpair failed and we were unable to recover it. 
00:29:20.271 [2024-07-15 20:27:45.491421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.271 [2024-07-15 20:27:45.491541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.271 [2024-07-15 20:27:45.491562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.271 [2024-07-15 20:27:45.491572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.271 [2024-07-15 20:27:45.491582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.271 [2024-07-15 20:27:45.491601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.271 qpair failed and we were unable to recover it. 00:29:20.271 [2024-07-15 20:27:45.501290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.271 [2024-07-15 20:27:45.501398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.271 [2024-07-15 20:27:45.501419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.271 [2024-07-15 20:27:45.501429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.271 [2024-07-15 20:27:45.501438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.271 [2024-07-15 20:27:45.501458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.271 qpair failed and we were unable to recover it. 00:29:20.271 [2024-07-15 20:27:45.511316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.271 [2024-07-15 20:27:45.511435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.271 [2024-07-15 20:27:45.511456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.271 [2024-07-15 20:27:45.511466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.271 [2024-07-15 20:27:45.511476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.271 [2024-07-15 20:27:45.511496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.271 qpair failed and we were unable to recover it. 
00:29:20.271 [2024-07-15 20:27:45.521337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.271 [2024-07-15 20:27:45.521473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.271 [2024-07-15 20:27:45.521495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.271 [2024-07-15 20:27:45.521505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.271 [2024-07-15 20:27:45.521515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.271 [2024-07-15 20:27:45.521535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.271 qpair failed and we were unable to recover it. 00:29:20.271 [2024-07-15 20:27:45.531573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.271 [2024-07-15 20:27:45.531695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.271 [2024-07-15 20:27:45.531716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.271 [2024-07-15 20:27:45.531727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.271 [2024-07-15 20:27:45.531736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.271 [2024-07-15 20:27:45.531756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.271 qpair failed and we were unable to recover it. 00:29:20.271 [2024-07-15 20:27:45.541350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.271 [2024-07-15 20:27:45.541450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.271 [2024-07-15 20:27:45.541471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.271 [2024-07-15 20:27:45.541481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.271 [2024-07-15 20:27:45.541492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.271 [2024-07-15 20:27:45.541512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.271 qpair failed and we were unable to recover it. 
00:29:20.271 [2024-07-15 20:27:45.551495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.271 [2024-07-15 20:27:45.551595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.271 [2024-07-15 20:27:45.551615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.271 [2024-07-15 20:27:45.551625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.271 [2024-07-15 20:27:45.551635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.271 [2024-07-15 20:27:45.551656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.271 qpair failed and we were unable to recover it. 00:29:20.271 [2024-07-15 20:27:45.561476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.271 [2024-07-15 20:27:45.561566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.271 [2024-07-15 20:27:45.561588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.271 [2024-07-15 20:27:45.561602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.271 [2024-07-15 20:27:45.561611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.271 [2024-07-15 20:27:45.561632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.271 qpair failed and we were unable to recover it. 00:29:20.271 [2024-07-15 20:27:45.571722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.271 [2024-07-15 20:27:45.571843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.271 [2024-07-15 20:27:45.571863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.271 [2024-07-15 20:27:45.571874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.271 [2024-07-15 20:27:45.571883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.271 [2024-07-15 20:27:45.571903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.271 qpair failed and we were unable to recover it. 
00:29:20.271 [2024-07-15 20:27:45.581613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.271 [2024-07-15 20:27:45.581714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.271 [2024-07-15 20:27:45.581734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.271 [2024-07-15 20:27:45.581744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.271 [2024-07-15 20:27:45.581753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.271 [2024-07-15 20:27:45.581774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.271 qpair failed and we were unable to recover it. 00:29:20.271 [2024-07-15 20:27:45.591570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.271 [2024-07-15 20:27:45.591668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.271 [2024-07-15 20:27:45.591689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.271 [2024-07-15 20:27:45.591699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.271 [2024-07-15 20:27:45.591709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.271 [2024-07-15 20:27:45.591729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.271 qpair failed and we were unable to recover it. 00:29:20.271 [2024-07-15 20:27:45.601594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.271 [2024-07-15 20:27:45.601690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.271 [2024-07-15 20:27:45.601711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.271 [2024-07-15 20:27:45.601721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.271 [2024-07-15 20:27:45.601732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.271 [2024-07-15 20:27:45.601752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.271 qpair failed and we were unable to recover it. 
00:29:20.271 [2024-07-15 20:27:45.611895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.271 [2024-07-15 20:27:45.612015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.271 [2024-07-15 20:27:45.612037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.271 [2024-07-15 20:27:45.612047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.271 [2024-07-15 20:27:45.612057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.271 [2024-07-15 20:27:45.612077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.271 qpair failed and we were unable to recover it. 00:29:20.531 [2024-07-15 20:27:45.621668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.531 [2024-07-15 20:27:45.621774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.531 [2024-07-15 20:27:45.621796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.531 [2024-07-15 20:27:45.621807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.531 [2024-07-15 20:27:45.621817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc90d70 00:29:20.531 [2024-07-15 20:27:45.621837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.531 qpair failed and we were unable to recover it. 00:29:20.531 [2024-07-15 20:27:45.631715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.531 [2024-07-15 20:27:45.631806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.531 [2024-07-15 20:27:45.631828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.531 [2024-07-15 20:27:45.631836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.531 [2024-07-15 20:27:45.631842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:20.531 [2024-07-15 20:27:45.631858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.531 qpair failed and we were unable to recover it. 
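For anyone triaging this by hand, the fabrics CONNECT that keeps failing in the blocks above can be probed from the initiator side with plain nvme-cli. The sketch below is only a debugging aid and is not part of the test harness: it assumes nvme-cli and the kernel nvme-tcp module are available, takes the traddr/trsvcid/subnqn from the log lines above, and the retry count and sleep are arbitrary.

```bash
#!/usr/bin/env bash
# Debugging sketch only -- not part of the autotest harness. Assumes nvme-cli
# is installed and the kernel nvme-tcp module is loaded on the initiator; the
# address, port, and subsystem NQN are the ones printed in the failing
# CONNECT attempts above. Retry count and sleep are arbitrary choices.
set -u

TRADDR=10.0.0.2                       # traddr from the log
TRSVCID=4420                          # trsvcid from the log
SUBNQN=nqn.2016-06.io.spdk:cnode1     # subnqn from the log

# First confirm the target is still advertising the subsystem.
sudo nvme discover -t tcp -a "$TRADDR" -s "$TRSVCID"

# The target-side log rejects the I/O queue CONNECT with "Unknown controller
# ID 0x1", which normally clears once the controller has been re-created, so
# retry the connect a few times instead of giving up on the first error.
for attempt in {1..5}; do
    if sudo nvme connect -t tcp -a "$TRADDR" -s "$TRSVCID" -n "$SUBNQN"; then
        echo "connected on attempt $attempt"
        sudo nvme list-subsys
        sudo nvme disconnect -n "$SUBNQN"
        break
    fi
    echo "connect attempt $attempt failed, retrying" >&2
    sleep 1
done
```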
00:29:20.531 [2024-07-15 20:27:45.641732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.531 [2024-07-15 20:27:45.641811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.531 [2024-07-15 20:27:45.641827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.531 [2024-07-15 20:27:45.641833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.531 [2024-07-15 20:27:45.641839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3704000b90 00:29:20.531 [2024-07-15 20:27:45.641853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.531 qpair failed and we were unable to recover it. 00:29:20.531 [2024-07-15 20:27:45.641921] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:20.531 A controller has encountered a failure and is being reset. 00:29:20.531 Controller properly reset. 00:29:20.531 Initializing NVMe Controllers 00:29:20.531 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:20.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:20.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:20.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:20.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:20.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:20.531 Initialization complete. Launching workers. 
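The keep-alive failure is what finally breaks the retry loop above: once "Submitting Keep Alive failed" is reported, the initiator resets the controller, re-attaches to 10.0.0.2:4420, re-associates a qpair with each of the four lcores, and launches its workers again, after which the test tears down and reports its timing. For an out-of-band check that the listener is reachable again, something like the following could be used; this is a sketch with the kernel initiator (nvme-cli) and assumes it is installed, whereas the test itself drives the SPDK userspace initiator.

    # Sketch only: manual reachability check with nvme-cli, using the address and
    # NQN reported in the log; not part of the test flow shown here.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list        # the subsystem's namespace should appear if CONNECT succeeded
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1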
00:29:20.531 Starting thread on core 1 00:29:20.531 Starting thread on core 2 00:29:20.531 Starting thread on core 3 00:29:20.531 Starting thread on core 0 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:20.531 00:29:20.531 real 0m11.358s 00:29:20.531 user 0m21.264s 00:29:20.531 sys 0m4.141s 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.531 ************************************ 00:29:20.531 END TEST nvmf_target_disconnect_tc2 00:29:20.531 ************************************ 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:20.531 rmmod nvme_tcp 00:29:20.531 rmmod nvme_fabrics 00:29:20.531 rmmod nvme_keyring 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 217563 ']' 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 217563 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 217563 ']' 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 217563 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 217563 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 217563' 00:29:20.531 killing process with pid 217563 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 217563 00:29:20.531 20:27:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 217563 00:29:21.097 20:27:46 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:21.097 20:27:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:21.097 20:27:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:21.097 20:27:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:21.097 20:27:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:21.097 20:27:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.097 20:27:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:21.097 20:27:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.999 20:27:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:22.999 00:29:22.999 real 0m19.692s 00:29:22.999 user 0m48.170s 00:29:22.999 sys 0m8.809s 00:29:22.999 20:27:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:22.999 20:27:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:22.999 ************************************ 00:29:22.999 END TEST nvmf_target_disconnect 00:29:22.999 ************************************ 00:29:22.999 20:27:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:22.999 20:27:48 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:29:22.999 20:27:48 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:22.999 20:27:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:22.999 20:27:48 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:29:22.999 00:29:22.999 real 22m22.427s 00:29:22.999 user 49m44.077s 00:29:22.999 sys 6m28.937s 00:29:22.999 20:27:48 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:22.999 20:27:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:22.999 ************************************ 00:29:22.999 END TEST nvmf_tcp 00:29:22.999 ************************************ 00:29:22.999 20:27:48 -- common/autotest_common.sh@1142 -- # return 0 00:29:22.999 20:27:48 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:29:23.258 20:27:48 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:23.258 20:27:48 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:23.258 20:27:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:23.258 20:27:48 -- common/autotest_common.sh@10 -- # set +x 00:29:23.258 ************************************ 00:29:23.258 START TEST spdkcli_nvmf_tcp 00:29:23.258 ************************************ 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:23.258 * Looking for test storage... 
00:29:23.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.258 20:27:48 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=219281 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 219281 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 219281 ']' 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:23.259 20:27:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:23.259 [2024-07-15 20:27:48.545262] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:29:23.259 [2024-07-15 20:27:48.545319] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid219281 ] 00:29:23.259 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.517 [2024-07-15 20:27:48.625607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:23.517 [2024-07-15 20:27:48.717596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.517 [2024-07-15 20:27:48.717601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.517 20:27:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:23.517 20:27:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:29:23.517 20:27:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:23.517 20:27:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:23.517 20:27:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:23.517 20:27:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:23.517 20:27:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:23.517 20:27:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:23.518 20:27:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:23.518 20:27:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:23.518 20:27:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:23.518 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:23.518 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:23.518 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:23.518 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:23.518 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:23.518 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:23.518 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:23.518 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:23.518 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:23.518 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:23.518 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:23.518 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:23.518 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:23.518 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:23.518 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:23.518 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:23.518 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:23.518 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:23.518 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:23.518 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:23.518 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:23.518 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:23.518 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:23.518 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:23.518 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:23.518 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:23.518 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:23.518 ' 00:29:26.052 [2024-07-15 20:27:51.279171] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.430 [2024-07-15 20:27:52.455419] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:29.333 [2024-07-15 20:27:54.618563] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:31.237 [2024-07-15 20:27:56.476737] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:32.614 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:32.614 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:32.614 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:32.614 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:32.614 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:32.614 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:32.614 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:32.614 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:32.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:32.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:32.614 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:32.614 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:32.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:32.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:32.614 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:32.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:32.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:32.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:32.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:32.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:32.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:32.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:32.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:32.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:32.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:32.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:32.614 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:32.614 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:32.873 20:27:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:32.873 20:27:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:32.873 20:27:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:32.873 20:27:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:32.873 20:27:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:32.873 20:27:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:32.873 20:27:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:32.873 20:27:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:33.131 20:27:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:33.389 20:27:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:33.389 20:27:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:33.389 20:27:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:33.389 20:27:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:33.389 20:27:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:33.389 20:27:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:33.389 20:27:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:33.389 20:27:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:33.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:33.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:33.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:33.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:33.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:33.389 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:33.389 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:33.389 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:33.389 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:33.389 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:33.389 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:33.389 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:33.389 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:33.389 ' 00:29:38.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:38.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:38.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:38.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:38.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:38.657 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:38.657 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:38.657 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:38.657 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:38.657 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:38.657 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:29:38.657 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:38.657 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:38.657 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 219281 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 219281 ']' 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 219281 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 219281 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 219281' 00:29:38.657 killing process with pid 219281 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 219281 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 219281 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 219281 ']' 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 219281 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 219281 ']' 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 219281 00:29:38.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (219281) - No such process 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 219281 is not found' 00:29:38.657 Process with pid 219281 is not found 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:38.657 00:29:38.657 real 0m15.435s 00:29:38.657 user 0m31.877s 00:29:38.657 sys 0m0.741s 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:38.657 20:28:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:38.657 ************************************ 00:29:38.657 END TEST spdkcli_nvmf_tcp 00:29:38.657 ************************************ 00:29:38.657 20:28:03 -- common/autotest_common.sh@1142 -- # return 0 00:29:38.657 20:28:03 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:38.657 20:28:03 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:38.657 20:28:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:38.657 20:28:03 -- common/autotest_common.sh@10 -- # set +x 00:29:38.657 ************************************ 00:29:38.657 START TEST nvmf_identify_passthru 00:29:38.657 ************************************ 00:29:38.657 20:28:03 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:38.657 * Looking for test storage... 00:29:38.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:38.657 20:28:03 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:38.657 20:28:03 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:38.657 20:28:03 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:38.657 20:28:03 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:38.657 20:28:03 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.657 20:28:03 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.657 20:28:03 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.657 20:28:03 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:38.657 20:28:03 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:38.657 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:38.658 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:38.658 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:38.658 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:38.658 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:38.658 20:28:03 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:38.658 20:28:03 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:38.658 20:28:03 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:38.658 20:28:03 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:38.658 20:28:03 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.658 20:28:03 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.658 20:28:03 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.658 20:28:03 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:38.658 20:28:03 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.658 20:28:03 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:38.658 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:38.658 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.658 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:38.658 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:38.658 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:38.658 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.658 20:28:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:38.658 20:28:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.658 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:38.658 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:38.658 20:28:03 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:38.658 20:28:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:43.927 20:28:08 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:43.927 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:43.927 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:43.927 Found net devices under 0000:af:00.0: cvl_0_0 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:43.927 Found net devices under 0000:af:00.1: cvl_0_1 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
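Everything from gather_supported_nvmf_pci_devs down to the two "Found net devices under 0000:af:00.x" lines is a PCI-ID match (E810, device 0x159b) followed by a sysfs lookup that maps each matching PCI function to its kernel net device. The same lookup can be reproduced by hand with a couple of lines of shell; this is only a sketch and assumes the 0000:af:00.0 / 0000:af:00.1 addresses reported above.

    # Sketch: map the E810 PCI functions reported above to their net devices,
    # mirroring the /sys/bus/pci/devices/$pci/net/* glob used by nvmf/common.sh.
    for pci in 0000:af:00.0 0000:af:00.1; do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdev" ] && echo "$pci -> $(basename "$netdev")"
        done
    done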
00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:43.927 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:43.928 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:43.928 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:43.928 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:43.928 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:43.928 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:43.928 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:43.928 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:43.928 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:43.928 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:43.928 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:43.928 20:28:08 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:43.928 20:28:09 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:43.928 20:28:09 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:43.928 20:28:09 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:43.928 20:28:09 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:43.928 20:28:09 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:43.928 20:28:09 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:43.928 20:28:09 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:43.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:43.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:29:43.928 00:29:43.928 --- 10.0.0.2 ping statistics --- 00:29:43.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.928 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:29:43.928 20:28:09 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:43.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:43.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:29:43.928 00:29:43.928 --- 10.0.0.1 ping statistics --- 00:29:43.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.928 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:29:43.928 20:28:09 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:43.928 20:28:09 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:29:43.928 20:28:09 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:43.928 20:28:09 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:43.928 20:28:09 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:43.928 20:28:09 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:43.928 20:28:09 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:43.928 20:28:09 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:43.928 20:28:09 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:44.189 20:28:09 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:44.189 20:28:09 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:44.189 20:28:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:44.189 20:28:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:44.189 20:28:09 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:29:44.189 20:28:09 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:29:44.189 20:28:09 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:29:44.189 20:28:09 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:29:44.189 20:28:09 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:44.189 20:28:09 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:29:44.189 20:28:09 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:44.189 20:28:09 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:44.189 20:28:09 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:44.189 20:28:09 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:44.189 20:28:09 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:86:00.0 00:29:44.189 20:28:09 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:86:00.0 00:29:44.189 20:28:09 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:86:00.0 00:29:44.189 20:28:09 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:86:00.0 ']' 00:29:44.189 20:28:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:29:44.189 20:28:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:44.189 20:28:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:44.189 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.453 
20:28:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ916308MR1P0FGN 00:29:48.453 20:28:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:29:48.453 20:28:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:48.453 20:28:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:48.453 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.646 20:28:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:29:52.646 20:28:17 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:52.646 20:28:17 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:52.646 20:28:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:52.646 20:28:17 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:52.646 20:28:17 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:52.646 20:28:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:52.646 20:28:17 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=226479 00:29:52.646 20:28:17 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:52.646 20:28:17 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:52.646 20:28:17 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 226479 00:29:52.646 20:28:17 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 226479 ']' 00:29:52.646 20:28:17 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.646 20:28:17 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:52.646 20:28:17 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.646 20:28:17 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:52.646 20:28:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:52.646 [2024-07-15 20:28:17.986621] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:29:52.646 [2024-07-15 20:28:17.986682] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.906 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.906 [2024-07-15 20:28:18.070722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:52.906 [2024-07-15 20:28:18.163515] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.906 [2024-07-15 20:28:18.163557] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:52.906 [2024-07-15 20:28:18.163567] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.906 [2024-07-15 20:28:18.163575] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.906 [2024-07-15 20:28:18.163582] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.906 [2024-07-15 20:28:18.163635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.906 [2024-07-15 20:28:18.163733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:52.907 [2024-07-15 20:28:18.163822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:52.907 [2024-07-15 20:28:18.163824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.907 20:28:18 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:52.907 20:28:18 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:29:52.907 20:28:18 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:52.907 20:28:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.907 20:28:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:52.907 INFO: Log level set to 20 00:29:52.907 INFO: Requests: 00:29:52.907 { 00:29:52.907 "jsonrpc": "2.0", 00:29:52.907 "method": "nvmf_set_config", 00:29:52.907 "id": 1, 00:29:52.907 "params": { 00:29:52.907 "admin_cmd_passthru": { 00:29:52.907 "identify_ctrlr": true 00:29:52.907 } 00:29:52.907 } 00:29:52.907 } 00:29:52.907 00:29:52.907 INFO: response: 00:29:52.907 { 00:29:52.907 "jsonrpc": "2.0", 00:29:52.907 "id": 1, 00:29:52.907 "result": true 00:29:52.907 } 00:29:52.907 00:29:52.907 20:28:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.907 20:28:18 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:52.907 20:28:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.907 20:28:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:52.907 INFO: Setting log level to 20 00:29:52.907 INFO: Setting log level to 20 00:29:52.907 INFO: Log level set to 20 00:29:52.907 INFO: Log level set to 20 00:29:52.907 INFO: Requests: 00:29:52.907 { 00:29:52.907 "jsonrpc": "2.0", 00:29:52.907 "method": "framework_start_init", 00:29:52.907 "id": 1 00:29:52.907 } 00:29:52.907 00:29:52.907 INFO: Requests: 00:29:52.907 { 00:29:52.907 "jsonrpc": "2.0", 00:29:52.907 "method": "framework_start_init", 00:29:52.907 "id": 1 00:29:52.907 } 00:29:52.907 00:29:53.166 [2024-07-15 20:28:18.285282] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:53.166 INFO: response: 00:29:53.166 { 00:29:53.166 "jsonrpc": "2.0", 00:29:53.166 "id": 1, 00:29:53.166 "result": true 00:29:53.166 } 00:29:53.166 00:29:53.166 INFO: response: 00:29:53.166 { 00:29:53.166 "jsonrpc": "2.0", 00:29:53.166 "id": 1, 00:29:53.166 "result": true 00:29:53.166 } 00:29:53.166 00:29:53.166 20:28:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.166 20:28:18 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:53.166 20:28:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.166 20:28:18 nvmf_identify_passthru -- 
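Because nvmf_tgt was started with --wait-for-rpc, the test can enable the passthru identify handler before the framework finishes initializing; the requests and responses logged above are plain JSON-RPC over /var/tmp/spdk.sock. rpc_cmd in the trace is autotest's wrapper around scripts/rpc.py, so the same sequence issued by hand looks roughly like:

scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # "Custom identify ctrlr handler enabled"
scripts/rpc.py framework_start_init                        # completes the deferred subsystem init
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # "*** TCP Transport Init ***"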
common/autotest_common.sh@10 -- # set +x 00:29:53.166 INFO: Setting log level to 40 00:29:53.166 INFO: Setting log level to 40 00:29:53.166 INFO: Setting log level to 40 00:29:53.166 [2024-07-15 20:28:18.299282] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.166 20:28:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.166 20:28:18 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:53.166 20:28:18 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:53.166 20:28:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:53.166 20:28:18 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0 00:29:53.166 20:28:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.166 20:28:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.455 Nvme0n1 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.455 20:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.455 20:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.455 20:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.455 [2024-07-15 20:28:21.225596] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.455 20:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.455 [ 00:29:56.455 { 00:29:56.455 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:56.455 "subtype": "Discovery", 00:29:56.455 "listen_addresses": [], 00:29:56.455 "allow_any_host": true, 00:29:56.455 "hosts": [] 00:29:56.455 }, 00:29:56.455 { 00:29:56.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:56.455 "subtype": "NVMe", 00:29:56.455 "listen_addresses": [ 00:29:56.455 { 00:29:56.455 "trtype": "TCP", 00:29:56.455 "adrfam": "IPv4", 00:29:56.455 "traddr": "10.0.0.2", 00:29:56.455 "trsvcid": "4420" 00:29:56.455 } 00:29:56.455 ], 00:29:56.455 "allow_any_host": true, 00:29:56.455 "hosts": [], 00:29:56.455 "serial_number": 
"SPDK00000000000001", 00:29:56.455 "model_number": "SPDK bdev Controller", 00:29:56.455 "max_namespaces": 1, 00:29:56.455 "min_cntlid": 1, 00:29:56.455 "max_cntlid": 65519, 00:29:56.455 "namespaces": [ 00:29:56.455 { 00:29:56.455 "nsid": 1, 00:29:56.455 "bdev_name": "Nvme0n1", 00:29:56.455 "name": "Nvme0n1", 00:29:56.455 "nguid": "CD5DD6AE9AE94476A50D658A4A6023D7", 00:29:56.455 "uuid": "cd5dd6ae-9ae9-4476-a50d-658a4a6023d7" 00:29:56.455 } 00:29:56.455 ] 00:29:56.455 } 00:29:56.455 ] 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.455 20:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:56.455 20:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:56.455 20:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:56.455 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.455 20:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ916308MR1P0FGN 00:29:56.455 20:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:56.455 20:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:56.455 20:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:56.455 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.455 20:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:29:56.455 20:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ916308MR1P0FGN '!=' BTLJ916308MR1P0FGN ']' 00:29:56.455 20:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:29:56.455 20:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.455 20:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:56.455 20:28:21 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:56.455 20:28:21 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:56.455 20:28:21 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:56.455 20:28:21 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:56.455 20:28:21 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:56.455 20:28:21 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:56.455 20:28:21 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:56.455 rmmod nvme_tcp 00:29:56.455 rmmod nvme_fabrics 00:29:56.455 rmmod nvme_keyring 00:29:56.455 20:28:21 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:56.455 20:28:21 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:56.455 20:28:21 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:56.455 20:28:21 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 226479 ']' 00:29:56.455 20:28:21 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 226479 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 226479 ']' 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 226479 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 226479 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 226479' 00:29:56.455 killing process with pid 226479 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 226479 00:29:56.455 20:28:21 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 226479 00:29:58.359 20:28:23 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:58.359 20:28:23 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:58.359 20:28:23 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:58.359 20:28:23 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:58.359 20:28:23 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:58.359 20:28:23 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.359 20:28:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:58.359 20:28:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.262 20:28:25 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:00.263 00:30:00.263 real 0m21.484s 00:30:00.263 user 0m27.888s 00:30:00.263 sys 0m4.853s 00:30:00.263 20:28:25 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:00.263 20:28:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:00.263 ************************************ 00:30:00.263 END TEST nvmf_identify_passthru 00:30:00.263 ************************************ 00:30:00.263 20:28:25 -- common/autotest_common.sh@1142 -- # return 0 00:30:00.263 20:28:25 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:00.263 20:28:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:00.263 20:28:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:00.263 20:28:25 -- common/autotest_common.sh@10 -- # set +x 00:30:00.263 ************************************ 00:30:00.263 START TEST nvmf_dif 00:30:00.263 ************************************ 00:30:00.263 20:28:25 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:00.263 * Looking for test storage... 
00:30:00.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:00.263 20:28:25 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:00.263 20:28:25 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.263 20:28:25 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.263 20:28:25 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.263 20:28:25 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.263 20:28:25 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.263 20:28:25 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.263 20:28:25 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:00.263 20:28:25 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:00.263 20:28:25 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:00.263 20:28:25 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:00.263 20:28:25 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:00.263 20:28:25 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:00.263 20:28:25 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.263 20:28:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:00.263 20:28:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:00.263 20:28:25 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:00.263 20:28:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:05.537 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:05.537 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:05.537 Found net devices under 0000:af:00.0: cvl_0_0 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:05.537 Found net devices under 0000:af:00.1: cvl_0_1 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.537 20:28:30 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.538 20:28:30 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:05.538 20:28:30 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.538 20:28:30 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.538 20:28:30 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:05.538 20:28:30 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:05.538 20:28:30 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.538 20:28:30 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.538 20:28:30 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.538 20:28:30 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.538 20:28:30 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:05.538 20:28:30 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.538 20:28:30 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.538 20:28:30 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.538 20:28:30 
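The two ice ports discovered above (0000:af:00.0 and 0000:af:00.1, device 0x159b in the e810 list) are mapped to kernel interface names by globbing sysfs, which is where cvl_0_0 and cvl_0_1 come from. A quick manual equivalent of that lookup:

for pci in 0000:af:00.0 0000:af:00.1; do
  echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
done
# prints cvl_0_0 and cvl_0_1 on this host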
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:05.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:30:05.538 00:30:05.538 --- 10.0.0.2 ping statistics --- 00:30:05.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.538 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:30:05.538 20:28:30 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:05.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:30:05.538 00:30:05.538 --- 10.0.0.1 ping statistics --- 00:30:05.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.538 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:30:05.538 20:28:30 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.538 20:28:30 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:05.538 20:28:30 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:05.538 20:28:30 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:08.078 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:30:08.078 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:08.078 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:30:08.078 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:30:08.078 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:30:08.078 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:30:08.078 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:30:08.078 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:30:08.078 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:30:08.078 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:30:08.078 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:30:08.078 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:30:08.078 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:30:08.078 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:30:08.078 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:30:08.078 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:30:08.078 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:30:08.337 20:28:33 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.337 20:28:33 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:08.337 20:28:33 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:08.337 20:28:33 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.337 20:28:33 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:08.337 20:28:33 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:08.337 20:28:33 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:08.337 20:28:33 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:08.337 20:28:33 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:08.337 20:28:33 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:08.337 20:28:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:08.337 20:28:33 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=232153 00:30:08.337 20:28:33 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 232153 00:30:08.337 20:28:33 nvmf_dif -- common/autotest_common.sh@829 
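For the phy run, nvmf_tcp_init moves the target-side port into its own network namespace, leaves the initiator-side port in the default namespace, and proves connectivity both ways with ping before any NVMe/TCP traffic. The commands traced above amount to:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

The nvmf_tgt launched next is wrapped in ip netns exec cvl_0_0_ns_spdk so it listens on 10.0.0.2 inside that namespace.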
-- # '[' -z 232153 ']' 00:30:08.337 20:28:33 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.337 20:28:33 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:08.337 20:28:33 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.337 20:28:33 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:08.337 20:28:33 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:08.337 20:28:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:08.337 [2024-07-15 20:28:33.666192] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:30:08.337 [2024-07-15 20:28:33.666250] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.595 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.595 [2024-07-15 20:28:33.752104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.595 [2024-07-15 20:28:33.841203] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:08.595 [2024-07-15 20:28:33.841245] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:08.595 [2024-07-15 20:28:33.841261] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:08.595 [2024-07-15 20:28:33.841270] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:08.595 [2024-07-15 20:28:33.841277] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:08.595 [2024-07-15 20:28:33.841299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.532 20:28:34 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:09.532 20:28:34 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:30:09.532 20:28:34 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:09.532 20:28:34 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:09.532 20:28:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:09.791 20:28:34 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:09.791 20:28:34 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:09.791 20:28:34 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:09.791 20:28:34 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:09.791 20:28:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:09.791 [2024-07-15 20:28:34.887944] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.791 20:28:34 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:09.791 20:28:34 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:09.791 20:28:34 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:09.791 20:28:34 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:09.791 20:28:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:09.791 ************************************ 00:30:09.791 START TEST fio_dif_1_default 00:30:09.791 ************************************ 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:09.791 bdev_null0 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- 
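fio_dif_1_default builds its target from a null bdev carrying 16 bytes of per-block metadata with DIF type 1 protection (the NULL_META/NULL_BLOCK_SIZE/NULL_SIZE/NULL_DIF defaults from dif.sh), exported through a TCP transport created with --dif-insert-or-strip so the target inserts and strips the protection information. The rpc_cmd calls traced here, plus the listener added just below, correspond roughly to:

scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # 64 MB bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420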
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:09.791 [2024-07-15 20:28:34.952216] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:09.791 { 00:30:09.791 "params": { 00:30:09.791 "name": "Nvme$subsystem", 00:30:09.791 "trtype": "$TEST_TRANSPORT", 00:30:09.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.791 "adrfam": "ipv4", 00:30:09.791 "trsvcid": "$NVMF_PORT", 00:30:09.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.791 "hdgst": ${hdgst:-false}, 00:30:09.791 "ddgst": ${ddgst:-false} 00:30:09.791 }, 00:30:09.791 "method": "bdev_nvme_attach_controller" 00:30:09.791 } 00:30:09.791 EOF 00:30:09.791 )") 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:09.791 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:09.792 20:28:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:09.792 20:28:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:09.792 20:28:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:09.792 20:28:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:09.792 20:28:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:09.792 "params": { 00:30:09.792 "name": "Nvme0", 00:30:09.792 "trtype": "tcp", 00:30:09.792 "traddr": "10.0.0.2", 00:30:09.792 "adrfam": "ipv4", 00:30:09.792 "trsvcid": "4420", 00:30:09.792 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:09.792 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:09.792 "hdgst": false, 00:30:09.792 "ddgst": false 00:30:09.792 }, 00:30:09.792 "method": "bdev_nvme_attach_controller" 00:30:09.792 }' 00:30:09.792 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:09.792 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:09.792 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:09.792 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:09.792 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:09.792 20:28:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:09.792 20:28:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:09.792 20:28:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:09.792 20:28:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:09.792 20:28:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:10.050 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:10.050 fio-3.35 00:30:10.050 Starting 1 thread 00:30:10.309 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.524 00:30:22.524 filename0: (groupid=0, jobs=1): err= 0: pid=232721: Mon Jul 15 20:28:46 2024 00:30:22.524 read: IOPS=188, BW=753KiB/s (771kB/s)(7552KiB/10029msec) 00:30:22.524 slat (nsec): min=9096, max=83949, avg=9560.74, stdev=2018.40 00:30:22.524 clat (usec): min=635, max=45841, avg=21221.30, stdev=20475.37 00:30:22.524 lat (usec): min=644, max=45872, avg=21230.86, stdev=20475.36 00:30:22.524 clat percentiles (usec): 00:30:22.524 | 1.00th=[ 644], 5.00th=[ 652], 10.00th=[ 652], 20.00th=[ 660], 00:30:22.524 | 30.00th=[ 668], 40.00th=[ 676], 50.00th=[41157], 60.00th=[41157], 00:30:22.524 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:30:22.524 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:30:22.524 | 99.99th=[45876] 00:30:22.524 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=753.60, stdev=30.22, samples=20 00:30:22.524 iops : min= 168, max= 192, 
avg=188.40, stdev= 7.56, samples=20 00:30:22.524 lat (usec) : 750=49.36%, 1000=0.42% 00:30:22.524 lat (msec) : 50=50.21% 00:30:22.524 cpu : usr=94.83%, sys=4.85%, ctx=20, majf=0, minf=196 00:30:22.524 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:22.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.524 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.524 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:22.524 00:30:22.524 Run status group 0 (all jobs): 00:30:22.524 READ: bw=753KiB/s (771kB/s), 753KiB/s-753KiB/s (771kB/s-771kB/s), io=7552KiB (7733kB), run=10029-10029msec 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.524 00:30:22.524 real 0m11.293s 00:30:22.524 user 0m21.309s 00:30:22.524 sys 0m0.804s 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:22.524 ************************************ 00:30:22.524 END TEST fio_dif_1_default 00:30:22.524 ************************************ 00:30:22.524 20:28:46 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:22.524 20:28:46 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:22.524 20:28:46 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:22.524 20:28:46 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:22.524 20:28:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:22.524 ************************************ 00:30:22.524 START TEST fio_dif_1_multi_subsystems 00:30:22.524 ************************************ 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:22.524 20:28:46 
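The workload in the run above goes through the SPDK bdev fio plugin: LD_PRELOAD points at build/fio/spdk_bdev, the bdev configuration produced by gen_nvmf_target_json arrives on /dev/fd/62, and the job written by gen_fio_conf arrives on /dev/fd/61. Judging only from the filename0 line (spdk_bdev engine, randread, 4 KiB blocks, iodepth 4, one thread, roughly 10 s), the generated job is approximately the following; the exact options live in test/nvmf/target/dif.sh and the bdev name Nvme0n1 is assumed here, so treat this as an inferred sketch rather than the literal file:

cat > dif_job.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
bs=4096
rw=randread
iodepth=4
EOF

LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio --spdk_json_conf=bdev.json dif_job.fio   # bdev.json stands in for the config piped on /dev/fd/62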
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:22.524 bdev_null0 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.524 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:22.525 [2024-07-15 20:28:46.314574] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:22.525 bdev_null1 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:22.525 { 00:30:22.525 "params": { 00:30:22.525 "name": "Nvme$subsystem", 00:30:22.525 "trtype": "$TEST_TRANSPORT", 00:30:22.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:22.525 "adrfam": "ipv4", 00:30:22.525 "trsvcid": "$NVMF_PORT", 00:30:22.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:22.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:22.525 "hdgst": ${hdgst:-false}, 00:30:22.525 "ddgst": ${ddgst:-false} 00:30:22.525 }, 00:30:22.525 "method": "bdev_nvme_attach_controller" 00:30:22.525 } 00:30:22.525 EOF 00:30:22.525 )") 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # 
local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:22.525 { 00:30:22.525 "params": { 00:30:22.525 "name": "Nvme$subsystem", 00:30:22.525 "trtype": "$TEST_TRANSPORT", 00:30:22.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:22.525 "adrfam": "ipv4", 00:30:22.525 "trsvcid": "$NVMF_PORT", 00:30:22.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:22.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:22.525 "hdgst": ${hdgst:-false}, 00:30:22.525 "ddgst": ${ddgst:-false} 00:30:22.525 }, 00:30:22.525 "method": "bdev_nvme_attach_controller" 00:30:22.525 } 00:30:22.525 EOF 00:30:22.525 )") 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:22.525 "params": { 00:30:22.525 "name": "Nvme0", 00:30:22.525 "trtype": "tcp", 00:30:22.525 "traddr": "10.0.0.2", 00:30:22.525 "adrfam": "ipv4", 00:30:22.525 "trsvcid": "4420", 00:30:22.525 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:22.525 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:22.525 "hdgst": false, 00:30:22.525 "ddgst": false 00:30:22.525 }, 00:30:22.525 "method": "bdev_nvme_attach_controller" 00:30:22.525 },{ 00:30:22.525 "params": { 00:30:22.525 "name": "Nvme1", 00:30:22.525 "trtype": "tcp", 00:30:22.525 "traddr": "10.0.0.2", 00:30:22.525 "adrfam": "ipv4", 00:30:22.525 "trsvcid": "4420", 00:30:22.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:22.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:22.525 "hdgst": false, 00:30:22.525 "ddgst": false 00:30:22.525 }, 00:30:22.525 "method": "bdev_nvme_attach_controller" 00:30:22.525 }' 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:22.525 20:28:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:22.525 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:22.525 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:22.525 fio-3.35 00:30:22.525 Starting 2 threads 00:30:22.525 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.505 00:30:32.505 filename0: (groupid=0, jobs=1): err= 0: pid=234837: Mon Jul 15 20:28:57 2024 00:30:32.505 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10019msec) 00:30:32.505 slat (nsec): min=9271, max=25043, avg=11112.48, stdev=2775.87 00:30:32.505 clat (usec): min=40812, max=42108, avg=41029.21, stdev=242.58 00:30:32.505 lat (usec): min=40821, max=42123, avg=41040.33, stdev=242.79 00:30:32.505 clat percentiles (usec): 00:30:32.505 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:30:32.505 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:32.505 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:30:32.505 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:32.505 | 99.99th=[42206] 
00:30:32.505 bw ( KiB/s): min= 384, max= 416, per=49.79%, avg=388.80, stdev=11.72, samples=20 00:30:32.505 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:30:32.505 lat (msec) : 50=100.00% 00:30:32.505 cpu : usr=97.35%, sys=2.36%, ctx=10, majf=0, minf=42 00:30:32.505 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:32.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.505 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.505 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:32.505 filename1: (groupid=0, jobs=1): err= 0: pid=234838: Mon Jul 15 20:28:57 2024 00:30:32.505 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10015msec) 00:30:32.505 slat (nsec): min=9256, max=33081, avg=11065.75, stdev=2832.46 00:30:32.505 clat (usec): min=40807, max=42003, avg=41012.89, stdev=202.15 00:30:32.505 lat (usec): min=40816, max=42018, avg=41023.95, stdev=202.41 00:30:32.505 clat percentiles (usec): 00:30:32.505 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:30:32.505 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:32.505 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:32.505 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:32.505 | 99.99th=[42206] 00:30:32.505 bw ( KiB/s): min= 384, max= 416, per=49.79%, avg=388.80, stdev=11.72, samples=20 00:30:32.505 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:30:32.505 lat (msec) : 50=100.00% 00:30:32.505 cpu : usr=97.11%, sys=2.60%, ctx=13, majf=0, minf=180 00:30:32.505 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:32.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.506 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.506 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:32.506 00:30:32.506 Run status group 0 (all jobs): 00:30:32.506 READ: bw=779KiB/s (798kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10015-10019msec 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.506 20:28:57 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.506 00:30:32.506 real 0m11.323s 00:30:32.506 user 0m30.660s 00:30:32.506 sys 0m0.852s 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:32.506 20:28:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:32.506 ************************************ 00:30:32.506 END TEST fio_dif_1_multi_subsystems 00:30:32.506 ************************************ 00:30:32.506 20:28:57 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:32.506 20:28:57 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:32.506 20:28:57 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:32.506 20:28:57 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:32.506 20:28:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:32.506 ************************************ 00:30:32.506 START TEST fio_dif_rand_params 00:30:32.506 ************************************ 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
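fio_dif_rand_params starts here with NULL_DIF=3, bs=128k, numjobs=3, iodepth=3 and runtime=5; gen_fio_conf (run further down) turns those values into the job file fio reads on /dev/fd/61. That generated file is not echoed in this log, so the following is only a hypothetical job file consistent with those parameters and with the "filename0 ... rw=randread, bs=128KiB, iodepth=3" banner that appears below; option names other than the ones visible in the log (thread, time_based, the Nvme0n1 bdev name) are assumptions.

# hypothetical job file matching the parameters set above
cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
numjobs=3
iodepth=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
EOF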
00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:32.506 bdev_null0 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:32.506 [2024-07-15 20:28:57.703288] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # 
gen_fio_conf 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:32.506 { 00:30:32.506 "params": { 00:30:32.506 "name": "Nvme$subsystem", 00:30:32.506 "trtype": "$TEST_TRANSPORT", 00:30:32.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:32.506 "adrfam": "ipv4", 00:30:32.506 "trsvcid": "$NVMF_PORT", 00:30:32.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:32.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:32.506 "hdgst": ${hdgst:-false}, 00:30:32.506 "ddgst": ${ddgst:-false} 00:30:32.506 }, 00:30:32.506 "method": "bdev_nvme_attach_controller" 00:30:32.506 } 00:30:32.506 EOF 00:30:32.506 )") 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
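The xtrace here interleaves three things running in one pipeline: gen_fio_conf writing the job file, gen_nvmf_target_json/jq writing the bdev config, and fio consuming both on /dev/fd/61 and /dev/fd/62. One way to express that wiring in plain bash is with process substitution; this is a sketch of the pattern using the test's own helper functions, not necessarily the exact redirection dif.sh uses:

# feed the generated job file on fd 61 and the JSON bdev config on fd 62
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 \
    61< <(gen_fio_conf) 62< <(create_json_sub_conf 0)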
00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:32.506 "params": { 00:30:32.506 "name": "Nvme0", 00:30:32.506 "trtype": "tcp", 00:30:32.506 "traddr": "10.0.0.2", 00:30:32.506 "adrfam": "ipv4", 00:30:32.506 "trsvcid": "4420", 00:30:32.506 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:32.506 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:32.506 "hdgst": false, 00:30:32.506 "ddgst": false 00:30:32.506 }, 00:30:32.506 "method": "bdev_nvme_attach_controller" 00:30:32.506 }' 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:32.506 20:28:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:33.074 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:33.074 ... 
00:30:33.074 fio-3.35 00:30:33.074 Starting 3 threads 00:30:33.074 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.635 00:30:39.635 filename0: (groupid=0, jobs=1): err= 0: pid=236940: Mon Jul 15 20:29:03 2024 00:30:39.635 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(135MiB/5048msec) 00:30:39.635 slat (nsec): min=4452, max=39044, avg=16106.74, stdev=3368.23 00:30:39.635 clat (usec): min=4914, max=57228, avg=13935.67, stdev=7152.54 00:30:39.635 lat (usec): min=4924, max=57240, avg=13951.78, stdev=7152.26 00:30:39.635 clat percentiles (usec): 00:30:39.635 | 1.00th=[ 5932], 5.00th=[ 9241], 10.00th=[ 9372], 20.00th=[10683], 00:30:39.635 | 30.00th=[11731], 40.00th=[12518], 50.00th=[13173], 60.00th=[13829], 00:30:39.635 | 70.00th=[14222], 80.00th=[14877], 90.00th=[15533], 95.00th=[16319], 00:30:39.635 | 99.00th=[52691], 99.50th=[53216], 99.90th=[54789], 99.95th=[57410], 00:30:39.635 | 99.99th=[57410] 00:30:39.635 bw ( KiB/s): min=17152, max=35584, per=34.54%, avg=27622.40, stdev=4893.78, samples=10 00:30:39.635 iops : min= 134, max= 278, avg=215.80, stdev=38.23, samples=10 00:30:39.635 lat (msec) : 10=15.06%, 20=81.70%, 50=1.02%, 100=2.22% 00:30:39.635 cpu : usr=95.66%, sys=4.00%, ctx=10, majf=0, minf=145 00:30:39.635 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:39.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.635 issued rwts: total=1082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:39.635 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:39.635 filename0: (groupid=0, jobs=1): err= 0: pid=236941: Mon Jul 15 20:29:03 2024 00:30:39.635 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(131MiB/5005msec) 00:30:39.635 slat (nsec): min=9808, max=64874, avg=27796.23, stdev=3284.17 00:30:39.635 clat (usec): min=7697, max=53093, avg=14265.43, stdev=6772.96 00:30:39.635 lat (usec): min=7719, max=53122, avg=14293.22, stdev=6772.73 00:30:39.635 clat percentiles (usec): 00:30:39.635 | 1.00th=[ 8455], 5.00th=[ 9372], 10.00th=[ 9372], 20.00th=[11076], 00:30:39.635 | 30.00th=[12125], 40.00th=[12911], 50.00th=[13566], 60.00th=[14091], 00:30:39.635 | 70.00th=[14746], 80.00th=[15401], 90.00th=[16319], 95.00th=[17433], 00:30:39.635 | 99.00th=[52167], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:30:39.635 | 99.99th=[53216] 00:30:39.635 bw ( KiB/s): min=20224, max=34560, per=33.51%, avg=26803.20, stdev=3745.05, samples=10 00:30:39.635 iops : min= 158, max= 270, avg=209.40, stdev=29.26, samples=10 00:30:39.635 lat (msec) : 10=15.52%, 20=81.62%, 50=0.67%, 100=2.19% 00:30:39.635 cpu : usr=93.49%, sys=5.92%, ctx=6, majf=0, minf=85 00:30:39.635 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:39.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.635 issued rwts: total=1050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:39.635 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:39.635 filename0: (groupid=0, jobs=1): err= 0: pid=236942: Mon Jul 15 20:29:03 2024 00:30:39.635 read: IOPS=204, BW=25.5MiB/s (26.8MB/s)(128MiB/5005msec) 00:30:39.635 slat (nsec): min=9391, max=52032, avg=15172.15, stdev=3507.90 00:30:39.635 clat (usec): min=4325, max=58772, avg=14670.77, stdev=7876.44 00:30:39.635 lat (usec): min=4343, max=58799, avg=14685.94, stdev=7876.54 00:30:39.635 clat percentiles (usec): 00:30:39.635 
| 1.00th=[ 5669], 5.00th=[ 7767], 10.00th=[ 9241], 20.00th=[10945], 00:30:39.635 | 30.00th=[12125], 40.00th=[12911], 50.00th=[13829], 60.00th=[14746], 00:30:39.635 | 70.00th=[15533], 80.00th=[16188], 90.00th=[17171], 95.00th=[17957], 00:30:39.635 | 99.00th=[52691], 99.50th=[53740], 99.90th=[58983], 99.95th=[58983], 00:30:39.635 | 99.99th=[58983] 00:30:39.635 bw ( KiB/s): min=11520, max=34048, per=32.62%, avg=26086.40, stdev=5971.44, samples=10 00:30:39.635 iops : min= 90, max= 266, avg=203.80, stdev=46.65, samples=10 00:30:39.635 lat (msec) : 10=15.26%, 20=81.12%, 50=0.20%, 100=3.42% 00:30:39.635 cpu : usr=95.60%, sys=4.04%, ctx=11, majf=0, minf=140 00:30:39.635 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:39.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.635 issued rwts: total=1022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:39.635 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:39.635 00:30:39.635 Run status group 0 (all jobs): 00:30:39.635 READ: bw=78.1MiB/s (81.9MB/s), 25.5MiB/s-26.8MiB/s (26.8MB/s-28.1MB/s), io=394MiB (413MB), run=5005-5048msec 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:39.635 20:29:04 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.635 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.636 bdev_null0 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.636 [2024-07-15 20:29:04.063103] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.636 bdev_null1 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
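The rpc_cmd calls above (bdev_null_create with --md-size 16 --dif-type 2, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) are the test's thin wrapper around SPDK's JSON-RPC interface. Run by hand against an already started target, the same per-subsystem setup would look roughly like the rpc.py sketch below; the arguments are copied from the log, while scripts/rpc.py as the entry point and the nvmf_create_transport prerequisite (done earlier in the suite, outside this excerpt) are assumptions about how one would reproduce it manually.

# transport must exist before listeners can be added (done earlier in the suite)
scripts/rpc.py nvmf_create_transport -t tcp
# 64 MB null bdev with 512-byte blocks, 16-byte metadata, DIF type 2
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420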
00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.636 bdev_null2 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:39.636 { 00:30:39.636 "params": { 00:30:39.636 "name": "Nvme$subsystem", 00:30:39.636 "trtype": "$TEST_TRANSPORT", 00:30:39.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:39.636 "adrfam": "ipv4", 00:30:39.636 "trsvcid": "$NVMF_PORT", 00:30:39.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:39.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:39.636 "hdgst": ${hdgst:-false}, 00:30:39.636 "ddgst": ${ddgst:-false} 00:30:39.636 }, 00:30:39.636 "method": "bdev_nvme_attach_controller" 00:30:39.636 } 00:30:39.636 EOF 00:30:39.636 )") 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:39.636 { 00:30:39.636 "params": { 00:30:39.636 "name": "Nvme$subsystem", 00:30:39.636 "trtype": "$TEST_TRANSPORT", 00:30:39.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:39.636 "adrfam": "ipv4", 00:30:39.636 "trsvcid": "$NVMF_PORT", 00:30:39.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:39.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:39.636 "hdgst": ${hdgst:-false}, 00:30:39.636 "ddgst": ${ddgst:-false} 00:30:39.636 }, 00:30:39.636 "method": "bdev_nvme_attach_controller" 00:30:39.636 } 00:30:39.636 EOF 00:30:39.636 )") 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:39.636 { 00:30:39.636 "params": { 00:30:39.636 "name": "Nvme$subsystem", 00:30:39.636 "trtype": "$TEST_TRANSPORT", 00:30:39.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:39.636 "adrfam": "ipv4", 00:30:39.636 "trsvcid": "$NVMF_PORT", 00:30:39.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:39.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:39.636 "hdgst": ${hdgst:-false}, 00:30:39.636 "ddgst": ${ddgst:-false} 00:30:39.636 }, 00:30:39.636 "method": "bdev_nvme_attach_controller" 00:30:39.636 } 00:30:39.636 EOF 00:30:39.636 )") 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:39.636 20:29:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:39.636 "params": { 00:30:39.636 "name": "Nvme0", 00:30:39.637 "trtype": "tcp", 00:30:39.637 "traddr": "10.0.0.2", 00:30:39.637 "adrfam": "ipv4", 00:30:39.637 "trsvcid": "4420", 00:30:39.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:39.637 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:39.637 "hdgst": false, 00:30:39.637 "ddgst": false 00:30:39.637 }, 00:30:39.637 "method": "bdev_nvme_attach_controller" 00:30:39.637 },{ 00:30:39.637 "params": { 00:30:39.637 "name": "Nvme1", 00:30:39.637 "trtype": "tcp", 00:30:39.637 "traddr": "10.0.0.2", 00:30:39.637 "adrfam": "ipv4", 00:30:39.637 "trsvcid": "4420", 00:30:39.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:39.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:39.637 "hdgst": false, 00:30:39.637 "ddgst": false 00:30:39.637 }, 00:30:39.637 "method": "bdev_nvme_attach_controller" 00:30:39.637 },{ 00:30:39.637 "params": { 00:30:39.637 "name": "Nvme2", 00:30:39.637 "trtype": "tcp", 00:30:39.637 "traddr": "10.0.0.2", 00:30:39.637 "adrfam": "ipv4", 00:30:39.637 "trsvcid": "4420", 00:30:39.637 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:39.637 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:39.637 "hdgst": false, 00:30:39.637 "ddgst": false 00:30:39.637 }, 00:30:39.637 "method": "bdev_nvme_attach_controller" 00:30:39.637 }' 00:30:39.637 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:39.637 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:39.637 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:39.637 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:39.637 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:39.637 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:39.637 20:29:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:30:39.637 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:39.637 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:39.637 20:29:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:39.637 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:39.637 ... 00:30:39.637 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:39.637 ... 00:30:39.637 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:39.637 ... 00:30:39.637 fio-3.35 00:30:39.637 Starting 24 threads 00:30:39.637 EAL: No free 2048 kB hugepages reported on node 1 00:30:51.880 00:30:51.880 filename0: (groupid=0, jobs=1): err= 0: pid=238137: Mon Jul 15 20:29:15 2024 00:30:51.880 read: IOPS=436, BW=1744KiB/s (1786kB/s)(17.1MiB/10018msec) 00:30:51.880 slat (nsec): min=4386, max=65730, avg=12884.07, stdev=3838.35 00:30:51.880 clat (usec): min=1709, max=46411, avg=36582.42, stdev=5927.33 00:30:51.880 lat (usec): min=1719, max=46426, avg=36595.30, stdev=5927.83 00:30:51.880 clat percentiles (usec): 00:30:51.880 | 1.00th=[ 1811], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:30:51.880 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:30:51.880 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:30:51.880 | 99.00th=[39584], 99.50th=[45351], 99.90th=[46400], 99.95th=[46400], 00:30:51.880 | 99.99th=[46400] 00:30:51.881 bw ( KiB/s): min= 1664, max= 2560, per=4.28%, avg=1741.25, stdev=200.71, samples=20 00:30:51.881 iops : min= 416, max= 640, avg=435.20, stdev=50.22, samples=20 00:30:51.881 lat (msec) : 2=1.10%, 4=0.73%, 10=1.10%, 20=0.37%, 50=96.70% 00:30:51.881 cpu : usr=98.60%, sys=0.99%, ctx=15, majf=0, minf=84 00:30:51.881 IO depths : 1=6.0%, 2=12.1%, 4=24.2%, 8=51.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:30:51.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.881 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.881 issued rwts: total=4368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.881 filename0: (groupid=0, jobs=1): err= 0: pid=238138: Mon Jul 15 20:29:15 2024 00:30:51.881 read: IOPS=427, BW=1710KiB/s (1751kB/s)(16.8MiB/10032msec) 00:30:51.881 slat (usec): min=9, max=110, avg=40.62, stdev=15.38 00:30:51.881 clat (usec): min=6703, max=47186, avg=37086.05, stdev=3383.09 00:30:51.881 lat (usec): min=6724, max=47208, avg=37126.67, stdev=3384.60 00:30:51.881 clat percentiles (usec): 00:30:51.881 | 1.00th=[ 8586], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:30:51.881 | 30.00th=[36963], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:30:51.881 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38011], 00:30:51.881 | 99.00th=[40109], 99.50th=[43779], 99.90th=[45876], 99.95th=[45876], 00:30:51.881 | 99.99th=[47449] 00:30:51.881 bw ( KiB/s): min= 1664, max= 1923, per=4.20%, avg=1710.15, stdev=75.40, samples=20 00:30:51.881 iops : min= 416, max= 480, avg=427.20, stdev=18.79, samples=20 00:30:51.881 lat (msec) : 10=1.12%, 20=0.05%, 
50=98.83% 00:30:51.881 cpu : usr=98.25%, sys=1.26%, ctx=50, majf=0, minf=53 00:30:51.881 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:51.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.881 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.881 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.881 filename0: (groupid=0, jobs=1): err= 0: pid=238139: Mon Jul 15 20:29:15 2024 00:30:51.881 read: IOPS=432, BW=1732KiB/s (1773kB/s)(16.9MiB/10002msec) 00:30:51.881 slat (usec): min=8, max=121, avg=46.71, stdev=29.49 00:30:51.881 clat (usec): min=17662, max=86354, avg=36509.14, stdev=4494.37 00:30:51.881 lat (usec): min=17672, max=86371, avg=36555.85, stdev=4500.22 00:30:51.881 clat percentiles (usec): 00:30:51.881 | 1.00th=[21890], 5.00th=[25822], 10.00th=[35390], 20.00th=[36963], 00:30:51.881 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[37487], 00:30:51.881 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38011], 00:30:51.881 | 99.00th=[44827], 99.50th=[58459], 99.90th=[72877], 99.95th=[72877], 00:30:51.881 | 99.99th=[86508] 00:30:51.881 bw ( KiB/s): min= 1664, max= 2016, per=4.25%, avg=1728.84, stdev=95.33, samples=19 00:30:51.881 iops : min= 416, max= 504, avg=432.21, stdev=23.83, samples=19 00:30:51.881 lat (msec) : 20=0.37%, 50=98.94%, 100=0.69% 00:30:51.881 cpu : usr=98.64%, sys=0.94%, ctx=13, majf=0, minf=56 00:30:51.881 IO depths : 1=4.6%, 2=9.4%, 4=19.7%, 8=57.6%, 16=8.7%, 32=0.0%, >=64=0.0% 00:30:51.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.881 complete : 0=0.0%, 4=92.8%, 8=2.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.881 issued rwts: total=4330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.881 filename0: (groupid=0, jobs=1): err= 0: pid=238140: Mon Jul 15 20:29:15 2024 00:30:51.881 read: IOPS=426, BW=1707KiB/s (1748kB/s)(16.7MiB/10014msec) 00:30:51.881 slat (usec): min=8, max=253, avg=43.98, stdev=20.09 00:30:51.881 clat (usec): min=20848, max=63245, avg=37066.81, stdev=2639.08 00:30:51.881 lat (usec): min=20875, max=63259, avg=37110.79, stdev=2642.89 00:30:51.881 clat percentiles (usec): 00:30:51.881 | 1.00th=[23200], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 00:30:51.881 | 30.00th=[36963], 40.00th=[36963], 50.00th=[37487], 60.00th=[37487], 00:30:51.881 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38011], 00:30:51.881 | 99.00th=[43254], 99.50th=[50594], 99.90th=[58983], 99.95th=[58983], 00:30:51.881 | 99.99th=[63177] 00:30:51.881 bw ( KiB/s): min= 1664, max= 1856, per=4.19%, avg=1705.26, stdev=65.17, samples=19 00:30:51.881 iops : min= 416, max= 464, avg=426.32, stdev=16.29, samples=19 00:30:51.881 lat (msec) : 50=99.44%, 100=0.56% 00:30:51.881 cpu : usr=98.24%, sys=1.20%, ctx=19, majf=0, minf=46 00:30:51.881 IO depths : 1=5.9%, 2=11.9%, 4=24.1%, 8=51.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:30:51.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.881 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.881 issued rwts: total=4274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.881 filename0: (groupid=0, jobs=1): err= 0: pid=238141: Mon Jul 15 20:29:15 2024 00:30:51.881 read: IOPS=422, BW=1689KiB/s 
(1729kB/s)(16.5MiB/10004msec) 00:30:51.881 slat (nsec): min=6585, max=51882, avg=26833.39, stdev=7662.10 00:30:51.881 clat (usec): min=18890, max=75097, avg=37639.75, stdev=2625.58 00:30:51.881 lat (usec): min=18913, max=75115, avg=37666.59, stdev=2624.77 00:30:51.881 clat percentiles (usec): 00:30:51.881 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:30:51.881 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:30:51.881 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536], 00:30:51.881 | 99.00th=[41681], 99.50th=[42206], 99.90th=[74974], 99.95th=[74974], 00:30:51.881 | 99.99th=[74974] 00:30:51.881 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1684.21, stdev=64.19, samples=19 00:30:51.881 iops : min= 384, max= 448, avg=421.05, stdev=16.05, samples=19 00:30:51.881 lat (msec) : 20=0.33%, 50=99.29%, 100=0.38% 00:30:51.881 cpu : usr=98.59%, sys=0.99%, ctx=16, majf=0, minf=36 00:30:51.881 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:51.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.881 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.881 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.881 filename0: (groupid=0, jobs=1): err= 0: pid=238142: Mon Jul 15 20:29:15 2024 00:30:51.881 read: IOPS=422, BW=1688KiB/s (1729kB/s)(16.5MiB/10008msec) 00:30:51.881 slat (nsec): min=5509, max=53936, avg=25955.69, stdev=7808.18 00:30:51.881 clat (usec): min=18922, max=87865, avg=37688.87, stdev=3065.42 00:30:51.881 lat (usec): min=18939, max=87880, avg=37714.83, stdev=3064.42 00:30:51.881 clat percentiles (usec): 00:30:51.881 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:30:51.881 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:30:51.881 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536], 00:30:51.881 | 99.00th=[41681], 99.50th=[42206], 99.90th=[82314], 99.95th=[82314], 00:30:51.881 | 99.99th=[87557] 00:30:51.881 bw ( KiB/s): min= 1408, max= 1792, per=4.14%, avg=1684.21, stdev=88.10, samples=19 00:30:51.881 iops : min= 352, max= 448, avg=421.05, stdev=22.02, samples=19 00:30:51.881 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:30:51.881 cpu : usr=98.69%, sys=0.89%, ctx=18, majf=0, minf=32 00:30:51.881 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:51.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.881 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.881 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.881 filename0: (groupid=0, jobs=1): err= 0: pid=238143: Mon Jul 15 20:29:15 2024 00:30:51.881 read: IOPS=427, BW=1709KiB/s (1750kB/s)(16.8MiB/10034msec) 00:30:51.881 slat (usec): min=6, max=120, avg=19.28, stdev=16.27 00:30:51.881 clat (usec): min=6709, max=47883, avg=37292.06, stdev=3364.73 00:30:51.881 lat (usec): min=6725, max=47949, avg=37311.34, stdev=3364.67 00:30:51.881 clat percentiles (usec): 00:30:51.881 | 1.00th=[ 8717], 5.00th=[36963], 10.00th=[37487], 20.00th=[37487], 00:30:51.881 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:30:51.881 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:30:51.881 | 99.00th=[40633], 99.50th=[43779], 
99.90th=[45876], 99.95th=[46400], 00:30:51.881 | 99.99th=[47973] 00:30:51.881 bw ( KiB/s): min= 1664, max= 1920, per=4.20%, avg=1710.00, stdev=74.95, samples=20 00:30:51.881 iops : min= 416, max= 480, avg=427.20, stdev=18.79, samples=20 00:30:51.881 lat (msec) : 10=1.12%, 20=0.05%, 50=98.83% 00:30:51.881 cpu : usr=98.74%, sys=0.86%, ctx=13, majf=0, minf=48 00:30:51.881 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:51.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.881 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.881 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.881 filename0: (groupid=0, jobs=1): err= 0: pid=238144: Mon Jul 15 20:29:15 2024 00:30:51.881 read: IOPS=423, BW=1694KiB/s (1734kB/s)(16.6MiB/10014msec) 00:30:51.881 slat (usec): min=10, max=122, avg=55.47, stdev=25.08 00:30:51.881 clat (usec): min=20680, max=50859, avg=37263.34, stdev=1446.43 00:30:51.881 lat (usec): min=20703, max=50882, avg=37318.81, stdev=1446.14 00:30:51.881 clat percentiles (usec): 00:30:51.881 | 1.00th=[36439], 5.00th=[36439], 10.00th=[36439], 20.00th=[36963], 00:30:51.881 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[37487], 00:30:51.881 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38011], 00:30:51.881 | 99.00th=[39584], 99.50th=[43254], 99.90th=[50594], 99.95th=[50594], 00:30:51.881 | 99.99th=[51119] 00:30:51.881 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1690.95, stdev=53.61, samples=19 00:30:51.881 iops : min= 416, max= 448, avg=422.74, stdev=13.40, samples=19 00:30:51.881 lat (msec) : 50=99.62%, 100=0.38% 00:30:51.881 cpu : usr=98.35%, sys=1.23%, ctx=29, majf=0, minf=52 00:30:51.881 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:51.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.881 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.881 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.881 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.881 filename1: (groupid=0, jobs=1): err= 0: pid=238145: Mon Jul 15 20:29:15 2024 00:30:51.881 read: IOPS=423, BW=1693KiB/s (1734kB/s)(16.6MiB/10015msec) 00:30:51.881 slat (usec): min=9, max=121, avg=54.35, stdev=26.26 00:30:51.881 clat (usec): min=20743, max=50582, avg=37344.01, stdev=1465.68 00:30:51.881 lat (usec): min=20782, max=50623, avg=37398.35, stdev=1461.86 00:30:51.881 clat percentiles (usec): 00:30:51.881 | 1.00th=[36439], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 00:30:51.881 | 30.00th=[36963], 40.00th=[36963], 50.00th=[37487], 60.00th=[37487], 00:30:51.881 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536], 00:30:51.881 | 99.00th=[40109], 99.50th=[43254], 99.90th=[50594], 99.95th=[50594], 00:30:51.881 | 99.99th=[50594] 00:30:51.881 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1690.95, stdev=53.61, samples=19 00:30:51.881 iops : min= 416, max= 448, avg=422.74, stdev=13.40, samples=19 00:30:51.881 lat (msec) : 50=99.62%, 100=0.38% 00:30:51.882 cpu : usr=98.64%, sys=0.93%, ctx=17, majf=0, minf=44 00:30:51.882 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:51.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:30:51.882 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.882 filename1: (groupid=0, jobs=1): err= 0: pid=238146: Mon Jul 15 20:29:15 2024 00:30:51.882 read: IOPS=423, BW=1694KiB/s (1734kB/s)(16.6MiB/10014msec) 00:30:51.882 slat (usec): min=10, max=116, avg=54.95, stdev=25.04 00:30:51.882 clat (usec): min=20739, max=50908, avg=37258.17, stdev=1444.45 00:30:51.882 lat (usec): min=20757, max=50926, avg=37313.12, stdev=1444.59 00:30:51.882 clat percentiles (usec): 00:30:51.882 | 1.00th=[36439], 5.00th=[36439], 10.00th=[36439], 20.00th=[36963], 00:30:51.882 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[37487], 00:30:51.882 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38011], 00:30:51.882 | 99.00th=[39584], 99.50th=[43254], 99.90th=[50594], 99.95th=[51119], 00:30:51.882 | 99.99th=[51119] 00:30:51.882 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1690.95, stdev=53.61, samples=19 00:30:51.882 iops : min= 416, max= 448, avg=422.74, stdev=13.40, samples=19 00:30:51.882 lat (msec) : 50=99.62%, 100=0.38% 00:30:51.882 cpu : usr=98.72%, sys=0.86%, ctx=12, majf=0, minf=55 00:30:51.882 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:51.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.882 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.882 filename1: (groupid=0, jobs=1): err= 0: pid=238147: Mon Jul 15 20:29:15 2024 00:30:51.882 read: IOPS=425, BW=1702KiB/s (1743kB/s)(16.6MiB/10003msec) 00:30:51.882 slat (usec): min=5, max=121, avg=20.77, stdev=15.98 00:30:51.882 clat (msec): min=5, max=114, avg=37.52, stdev= 5.34 00:30:51.882 lat (msec): min=5, max=114, avg=37.54, stdev= 5.34 00:30:51.882 clat percentiles (msec): 00:30:51.882 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 38], 00:30:51.882 | 30.00th=[ 38], 40.00th=[ 38], 50.00th=[ 38], 60.00th=[ 38], 00:30:51.882 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 39], 95.00th=[ 40], 00:30:51.882 | 99.00th=[ 54], 99.50th=[ 62], 99.90th=[ 92], 99.95th=[ 92], 00:30:51.882 | 99.99th=[ 115] 00:30:51.882 bw ( KiB/s): min= 1408, max= 1776, per=4.16%, avg=1693.47, stdev=85.88, samples=19 00:30:51.882 iops : min= 352, max= 444, avg=423.37, stdev=21.47, samples=19 00:30:51.882 lat (msec) : 10=0.14%, 20=0.82%, 50=97.70%, 100=1.29%, 250=0.05% 00:30:51.882 cpu : usr=98.60%, sys=0.98%, ctx=10, majf=0, minf=49 00:30:51.882 IO depths : 1=0.1%, 2=1.6%, 4=6.4%, 8=75.3%, 16=16.6%, 32=0.0%, >=64=0.0% 00:30:51.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.882 complete : 0=0.0%, 4=90.5%, 8=7.8%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.882 issued rwts: total=4256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.882 filename1: (groupid=0, jobs=1): err= 0: pid=238148: Mon Jul 15 20:29:15 2024 00:30:51.882 read: IOPS=423, BW=1694KiB/s (1734kB/s)(16.6MiB/10014msec) 00:30:51.882 slat (usec): min=9, max=121, avg=56.10, stdev=24.82 00:30:51.882 clat (usec): min=20714, max=50756, avg=37278.83, stdev=1442.53 00:30:51.882 lat (usec): min=20737, max=50777, avg=37334.93, stdev=1441.10 00:30:51.882 clat percentiles (usec): 00:30:51.882 | 1.00th=[36439], 5.00th=[36439], 10.00th=[36439], 20.00th=[36963], 
00:30:51.882 | 30.00th=[36963], 40.00th=[36963], 50.00th=[37487], 60.00th=[37487], 00:30:51.882 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38011], 00:30:51.882 | 99.00th=[40109], 99.50th=[43254], 99.90th=[50594], 99.95th=[50594], 00:30:51.882 | 99.99th=[50594] 00:30:51.882 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1690.95, stdev=53.61, samples=19 00:30:51.882 iops : min= 416, max= 448, avg=422.74, stdev=13.40, samples=19 00:30:51.882 lat (msec) : 50=99.62%, 100=0.38% 00:30:51.882 cpu : usr=98.79%, sys=0.79%, ctx=14, majf=0, minf=53 00:30:51.882 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:51.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.882 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.882 filename1: (groupid=0, jobs=1): err= 0: pid=238149: Mon Jul 15 20:29:15 2024 00:30:51.882 read: IOPS=427, BW=1709KiB/s (1750kB/s)(16.8MiB/10034msec) 00:30:51.882 slat (usec): min=7, max=121, avg=49.39, stdev=28.08 00:30:51.882 clat (usec): min=6692, max=46185, avg=37045.03, stdev=3347.41 00:30:51.882 lat (usec): min=6708, max=46240, avg=37094.42, stdev=3349.10 00:30:51.882 clat percentiles (usec): 00:30:51.882 | 1.00th=[ 8717], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 00:30:51.882 | 30.00th=[36963], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:30:51.882 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536], 00:30:51.882 | 99.00th=[40109], 99.50th=[43254], 99.90th=[45876], 99.95th=[45876], 00:30:51.882 | 99.99th=[46400] 00:30:51.882 bw ( KiB/s): min= 1664, max= 1920, per=4.20%, avg=1710.00, stdev=74.95, samples=20 00:30:51.882 iops : min= 416, max= 480, avg=427.20, stdev=18.79, samples=20 00:30:51.882 lat (msec) : 10=1.12%, 50=98.88% 00:30:51.882 cpu : usr=98.73%, sys=0.85%, ctx=11, majf=0, minf=41 00:30:51.882 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:51.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.882 issued rwts: total=4288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.882 filename1: (groupid=0, jobs=1): err= 0: pid=238150: Mon Jul 15 20:29:15 2024 00:30:51.882 read: IOPS=423, BW=1694KiB/s (1734kB/s)(16.6MiB/10014msec) 00:30:51.882 slat (usec): min=9, max=121, avg=54.68, stdev=25.33 00:30:51.882 clat (usec): min=20719, max=50788, avg=37260.67, stdev=1472.91 00:30:51.882 lat (usec): min=20746, max=50810, avg=37315.35, stdev=1473.11 00:30:51.882 clat percentiles (usec): 00:30:51.882 | 1.00th=[36439], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 00:30:51.882 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[37487], 00:30:51.882 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38011], 00:30:51.882 | 99.00th=[40109], 99.50th=[43254], 99.90th=[50594], 99.95th=[50594], 00:30:51.882 | 99.99th=[50594] 00:30:51.882 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1690.95, stdev=53.61, samples=19 00:30:51.882 iops : min= 416, max= 448, avg=422.74, stdev=13.40, samples=19 00:30:51.882 lat (msec) : 50=99.62%, 100=0.38% 00:30:51.882 cpu : usr=98.51%, sys=1.06%, ctx=20, majf=0, minf=46 00:30:51.882 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 
16=6.3%, 32=0.0%, >=64=0.0% 00:30:51.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.882 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.882 filename1: (groupid=0, jobs=1): err= 0: pid=238151: Mon Jul 15 20:29:15 2024 00:30:51.882 read: IOPS=422, BW=1689KiB/s (1730kB/s)(16.5MiB/10003msec) 00:30:51.882 slat (nsec): min=5805, max=49814, avg=26612.01, stdev=7784.77 00:30:51.882 clat (usec): min=18885, max=77176, avg=37645.05, stdev=2744.07 00:30:51.882 lat (usec): min=18903, max=77192, avg=37671.66, stdev=2743.12 00:30:51.882 clat percentiles (usec): 00:30:51.882 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:30:51.882 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:30:51.882 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536], 00:30:51.882 | 99.00th=[41681], 99.50th=[42206], 99.90th=[77071], 99.95th=[77071], 00:30:51.882 | 99.99th=[77071] 00:30:51.882 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1684.21, stdev=64.19, samples=19 00:30:51.882 iops : min= 384, max= 448, avg=421.05, stdev=16.05, samples=19 00:30:51.882 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:30:51.882 cpu : usr=98.60%, sys=0.96%, ctx=14, majf=0, minf=36 00:30:51.882 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:51.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.882 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.882 filename1: (groupid=0, jobs=1): err= 0: pid=238152: Mon Jul 15 20:29:15 2024 00:30:51.882 read: IOPS=422, BW=1688KiB/s (1729kB/s)(16.5MiB/10008msec) 00:30:51.882 slat (nsec): min=5639, max=51979, avg=26469.04, stdev=7849.37 00:30:51.882 clat (usec): min=18852, max=82347, avg=37684.40, stdev=3030.06 00:30:51.882 lat (usec): min=18871, max=82362, avg=37710.87, stdev=3028.92 00:30:51.882 clat percentiles (usec): 00:30:51.882 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:30:51.882 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:30:51.882 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536], 00:30:51.882 | 99.00th=[41681], 99.50th=[42206], 99.90th=[82314], 99.95th=[82314], 00:30:51.882 | 99.99th=[82314] 00:30:51.882 bw ( KiB/s): min= 1408, max= 1792, per=4.14%, avg=1684.21, stdev=88.10, samples=19 00:30:51.882 iops : min= 352, max= 448, avg=421.05, stdev=22.02, samples=19 00:30:51.882 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:30:51.882 cpu : usr=98.69%, sys=0.89%, ctx=11, majf=0, minf=59 00:30:51.882 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:51.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.882 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.882 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.882 filename2: (groupid=0, jobs=1): err= 0: pid=238153: Mon Jul 15 20:29:15 2024 00:30:51.882 read: IOPS=423, BW=1694KiB/s (1734kB/s)(16.6MiB/10014msec) 00:30:51.882 slat (usec): min=10, max=119, avg=55.98, stdev=25.01 
00:30:51.882 clat (usec): min=20852, max=50632, avg=37298.14, stdev=1434.75 00:30:51.882 lat (usec): min=20880, max=50652, avg=37354.13, stdev=1432.61 00:30:51.882 clat percentiles (usec): 00:30:51.882 | 1.00th=[36439], 5.00th=[36439], 10.00th=[36439], 20.00th=[36963], 00:30:51.882 | 30.00th=[36963], 40.00th=[36963], 50.00th=[37487], 60.00th=[37487], 00:30:51.882 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38011], 00:30:51.882 | 99.00th=[39584], 99.50th=[43779], 99.90th=[50594], 99.95th=[50594], 00:30:51.882 | 99.99th=[50594] 00:30:51.882 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1690.95, stdev=53.61, samples=19 00:30:51.882 iops : min= 416, max= 448, avg=422.74, stdev=13.40, samples=19 00:30:51.882 lat (msec) : 50=99.62%, 100=0.38% 00:30:51.882 cpu : usr=98.79%, sys=0.79%, ctx=12, majf=0, minf=40 00:30:51.882 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:51.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.882 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.883 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.883 filename2: (groupid=0, jobs=1): err= 0: pid=238154: Mon Jul 15 20:29:15 2024 00:30:51.883 read: IOPS=421, BW=1684KiB/s (1725kB/s)(16.5MiB/10002msec) 00:30:51.883 slat (usec): min=7, max=117, avg=32.21, stdev=26.30 00:30:51.883 clat (msec): min=9, max=110, avg=37.77, stdev= 5.44 00:30:51.883 lat (msec): min=9, max=110, avg=37.80, stdev= 5.44 00:30:51.883 clat percentiles (msec): 00:30:51.883 | 1.00th=[ 26], 5.00th=[ 32], 10.00th=[ 37], 20.00th=[ 37], 00:30:51.883 | 30.00th=[ 38], 40.00th=[ 38], 50.00th=[ 38], 60.00th=[ 38], 00:30:51.883 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 39], 95.00th=[ 44], 00:30:51.883 | 99.00th=[ 59], 99.50th=[ 62], 99.90th=[ 92], 99.95th=[ 92], 00:30:51.883 | 99.99th=[ 111] 00:30:51.883 bw ( KiB/s): min= 1408, max= 1776, per=4.13%, avg=1680.84, stdev=79.55, samples=19 00:30:51.883 iops : min= 352, max= 444, avg=420.21, stdev=19.89, samples=19 00:30:51.883 lat (msec) : 10=0.05%, 20=0.57%, 50=97.41%, 100=1.92%, 250=0.05% 00:30:51.883 cpu : usr=98.53%, sys=1.04%, ctx=13, majf=0, minf=70 00:30:51.883 IO depths : 1=0.3%, 2=3.6%, 4=14.1%, 8=67.7%, 16=14.4%, 32=0.0%, >=64=0.0% 00:30:51.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.883 complete : 0=0.0%, 4=92.0%, 8=4.5%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.883 issued rwts: total=4212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.883 filename2: (groupid=0, jobs=1): err= 0: pid=238155: Mon Jul 15 20:29:15 2024 00:30:51.883 read: IOPS=426, BW=1707KiB/s (1748kB/s)(16.7MiB/10003msec) 00:30:51.883 slat (usec): min=8, max=116, avg=16.11, stdev=10.77 00:30:51.883 clat (usec): min=9324, max=92117, avg=37362.33, stdev=5412.11 00:30:51.883 lat (usec): min=9333, max=92133, avg=37378.44, stdev=5412.30 00:30:51.883 clat percentiles (usec): 00:30:51.883 | 1.00th=[21627], 5.00th=[30540], 10.00th=[36439], 20.00th=[37487], 00:30:51.883 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:30:51.883 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[39060], 00:30:51.883 | 99.00th=[58983], 99.50th=[63701], 99.90th=[91751], 99.95th=[91751], 00:30:51.883 | 99.99th=[91751] 00:30:51.883 bw ( KiB/s): min= 1376, max= 1936, per=4.18%, avg=1702.74, stdev=107.77, samples=19 
00:30:51.883 iops : min= 344, max= 484, avg=425.68, stdev=26.94, samples=19 00:30:51.883 lat (msec) : 10=0.37%, 20=0.42%, 50=97.89%, 100=1.31% 00:30:51.883 cpu : usr=98.57%, sys=1.00%, ctx=16, majf=0, minf=42 00:30:51.883 IO depths : 1=4.7%, 2=9.6%, 4=20.3%, 8=57.0%, 16=8.5%, 32=0.0%, >=64=0.0% 00:30:51.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.883 complete : 0=0.0%, 4=92.9%, 8=2.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.883 issued rwts: total=4268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.883 filename2: (groupid=0, jobs=1): err= 0: pid=238156: Mon Jul 15 20:29:15 2024 00:30:51.883 read: IOPS=420, BW=1683KiB/s (1723kB/s)(16.4MiB/10002msec) 00:30:51.883 slat (nsec): min=9760, max=50603, avg=17652.48, stdev=7646.24 00:30:51.883 clat (usec): min=36583, max=95668, avg=37890.90, stdev=3607.43 00:30:51.883 lat (usec): min=36614, max=95693, avg=37908.55, stdev=3606.92 00:30:51.883 clat percentiles (usec): 00:30:51.883 | 1.00th=[36963], 5.00th=[36963], 10.00th=[37487], 20.00th=[37487], 00:30:51.883 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:30:51.883 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:30:51.883 | 99.00th=[41681], 99.50th=[42206], 99.90th=[95945], 99.95th=[95945], 00:30:51.883 | 99.99th=[95945] 00:30:51.883 bw ( KiB/s): min= 1408, max= 1792, per=4.14%, avg=1684.21, stdev=88.10, samples=19 00:30:51.883 iops : min= 352, max= 448, avg=421.05, stdev=22.02, samples=19 00:30:51.883 lat (msec) : 50=99.62%, 100=0.38% 00:30:51.883 cpu : usr=98.00%, sys=1.53%, ctx=21, majf=0, minf=100 00:30:51.883 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:51.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.883 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.883 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.883 filename2: (groupid=0, jobs=1): err= 0: pid=238157: Mon Jul 15 20:29:15 2024 00:30:51.883 read: IOPS=423, BW=1694KiB/s (1734kB/s)(16.6MiB/10014msec) 00:30:51.883 slat (usec): min=12, max=118, avg=55.80, stdev=24.80 00:30:51.883 clat (usec): min=19422, max=52733, avg=37267.59, stdev=1456.06 00:30:51.883 lat (usec): min=19434, max=52758, avg=37323.39, stdev=1455.49 00:30:51.883 clat percentiles (usec): 00:30:51.883 | 1.00th=[36439], 5.00th=[36439], 10.00th=[36439], 20.00th=[36963], 00:30:51.883 | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[37487], 00:30:51.883 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38011], 00:30:51.883 | 99.00th=[40109], 99.50th=[43254], 99.90th=[50594], 99.95th=[50594], 00:30:51.883 | 99.99th=[52691] 00:30:51.883 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1690.95, stdev=53.61, samples=19 00:30:51.883 iops : min= 416, max= 448, avg=422.74, stdev=13.40, samples=19 00:30:51.883 lat (msec) : 20=0.05%, 50=99.58%, 100=0.38% 00:30:51.883 cpu : usr=98.72%, sys=0.85%, ctx=12, majf=0, minf=51 00:30:51.883 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:51.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.883 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.883 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.883 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:30:51.883 filename2: (groupid=0, jobs=1): err= 0: pid=238158: Mon Jul 15 20:29:15 2024 00:30:51.883 read: IOPS=422, BW=1688KiB/s (1729kB/s)(16.5MiB/10007msec) 00:30:51.883 slat (nsec): min=5627, max=54974, avg=27113.96, stdev=7921.47 00:30:51.883 clat (usec): min=18935, max=81208, avg=37667.59, stdev=2965.40 00:30:51.883 lat (usec): min=18954, max=81224, avg=37694.70, stdev=2964.32 00:30:51.883 clat percentiles (usec): 00:30:51.883 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:30:51.883 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:30:51.883 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536], 00:30:51.883 | 99.00th=[41681], 99.50th=[42206], 99.90th=[81265], 99.95th=[81265], 00:30:51.883 | 99.99th=[81265] 00:30:51.883 bw ( KiB/s): min= 1410, max= 1792, per=4.14%, avg=1684.32, stdev=87.75, samples=19 00:30:51.883 iops : min= 352, max= 448, avg=421.05, stdev=22.02, samples=19 00:30:51.883 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:30:51.883 cpu : usr=98.66%, sys=0.92%, ctx=13, majf=0, minf=48 00:30:51.883 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:51.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.883 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.883 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.883 filename2: (groupid=0, jobs=1): err= 0: pid=238159: Mon Jul 15 20:29:15 2024 00:30:51.883 read: IOPS=422, BW=1689KiB/s (1730kB/s)(16.5MiB/10001msec) 00:30:51.883 slat (nsec): min=6453, max=58588, avg=25611.70, stdev=8567.39 00:30:51.883 clat (usec): min=18852, max=87561, avg=37641.04, stdev=2756.41 00:30:51.883 lat (usec): min=18862, max=87579, avg=37666.65, stdev=2755.72 00:30:51.883 clat percentiles (usec): 00:30:51.883 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:30:51.883 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:30:51.883 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536], 00:30:51.883 | 99.00th=[42206], 99.50th=[42206], 99.90th=[74974], 99.95th=[74974], 00:30:51.883 | 99.99th=[87557] 00:30:51.883 bw ( KiB/s): min= 1520, max= 1792, per=4.14%, avg=1684.21, stdev=66.15, samples=19 00:30:51.883 iops : min= 380, max= 448, avg=421.05, stdev=16.54, samples=19 00:30:51.883 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:30:51.883 cpu : usr=98.65%, sys=0.93%, ctx=10, majf=0, minf=40 00:30:51.883 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:30:51.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.883 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.883 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.883 filename2: (groupid=0, jobs=1): err= 0: pid=238160: Mon Jul 15 20:29:15 2024 00:30:51.883 read: IOPS=423, BW=1694KiB/s (1734kB/s)(16.6MiB/10014msec) 00:30:51.883 slat (usec): min=9, max=116, avg=52.95, stdev=25.86 00:30:51.883 clat (usec): min=20772, max=50952, avg=37256.37, stdev=1478.42 00:30:51.883 lat (usec): min=20791, max=50967, avg=37309.32, stdev=1479.39 00:30:51.883 clat percentiles (usec): 00:30:51.883 | 1.00th=[36439], 5.00th=[36439], 10.00th=[36963], 20.00th=[36963], 00:30:51.883 | 30.00th=[36963], 40.00th=[36963], 
50.00th=[36963], 60.00th=[37487], 00:30:51.883 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38011], 00:30:51.883 | 99.00th=[40109], 99.50th=[43779], 99.90th=[51119], 99.95th=[51119], 00:30:51.883 | 99.99th=[51119] 00:30:51.883 bw ( KiB/s): min= 1664, max= 1792, per=4.15%, avg=1690.95, stdev=53.61, samples=19 00:30:51.883 iops : min= 416, max= 448, avg=422.74, stdev=13.40, samples=19 00:30:51.883 lat (msec) : 50=99.62%, 100=0.38% 00:30:51.883 cpu : usr=98.75%, sys=0.82%, ctx=11, majf=0, minf=50 00:30:51.883 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:51.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.883 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:51.883 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:51.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:51.883 00:30:51.883 Run status group 0 (all jobs): 00:30:51.883 READ: bw=39.7MiB/s (41.7MB/s), 1683KiB/s-1744KiB/s (1723kB/s-1786kB/s), io=399MiB (418MB), run=10001-10034msec 00:30:51.883 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:51.883 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:51.883 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:51.883 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:51.883 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:51.883 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:51.883 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.883 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.883 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.883 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:51.883 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.883 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.883 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.883 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:51.883 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:51.883 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.884 
20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.884 bdev_null0 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.884 [2024-07-15 20:29:15.718217] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.884 bdev_null1 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:51.884 20:29:15 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:51.884 { 00:30:51.884 "params": { 00:30:51.884 "name": "Nvme$subsystem", 00:30:51.884 "trtype": "$TEST_TRANSPORT", 00:30:51.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.884 "adrfam": "ipv4", 00:30:51.884 "trsvcid": "$NVMF_PORT", 00:30:51.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.884 "hdgst": ${hdgst:-false}, 00:30:51.884 "ddgst": ${ddgst:-false} 00:30:51.884 }, 00:30:51.884 "method": "bdev_nvme_attach_controller" 00:30:51.884 } 00:30:51.884 EOF 00:30:51.884 )") 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:51.884 { 00:30:51.884 "params": { 00:30:51.884 "name": "Nvme$subsystem", 00:30:51.884 "trtype": "$TEST_TRANSPORT", 00:30:51.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.884 "adrfam": "ipv4", 00:30:51.884 "trsvcid": "$NVMF_PORT", 00:30:51.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.884 "hdgst": ${hdgst:-false}, 00:30:51.884 "ddgst": ${ddgst:-false} 00:30:51.884 }, 00:30:51.884 "method": "bdev_nvme_attach_controller" 00:30:51.884 } 00:30:51.884 EOF 00:30:51.884 )") 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file 
<= files )) 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:51.884 20:29:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:51.884 "params": { 00:30:51.884 "name": "Nvme0", 00:30:51.884 "trtype": "tcp", 00:30:51.884 "traddr": "10.0.0.2", 00:30:51.884 "adrfam": "ipv4", 00:30:51.884 "trsvcid": "4420", 00:30:51.884 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:51.884 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:51.884 "hdgst": false, 00:30:51.884 "ddgst": false 00:30:51.884 }, 00:30:51.884 "method": "bdev_nvme_attach_controller" 00:30:51.884 },{ 00:30:51.884 "params": { 00:30:51.884 "name": "Nvme1", 00:30:51.885 "trtype": "tcp", 00:30:51.885 "traddr": "10.0.0.2", 00:30:51.885 "adrfam": "ipv4", 00:30:51.885 "trsvcid": "4420", 00:30:51.885 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:51.885 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:51.885 "hdgst": false, 00:30:51.885 "ddgst": false 00:30:51.885 }, 00:30:51.885 "method": "bdev_nvme_attach_controller" 00:30:51.885 }' 00:30:51.885 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:51.885 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:51.885 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:51.885 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:51.885 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:51.885 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:51.885 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:51.885 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:51.885 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:51.885 20:29:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:51.885 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:51.885 ... 00:30:51.885 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:51.885 ... 
00:30:51.885 fio-3.35 00:30:51.885 Starting 4 threads 00:30:51.885 EAL: No free 2048 kB hugepages reported on node 1 00:30:57.257 00:30:57.257 filename0: (groupid=0, jobs=1): err= 0: pid=240369: Mon Jul 15 20:29:21 2024 00:30:57.257 read: IOPS=1782, BW=13.9MiB/s (14.6MB/s)(69.6MiB/5002msec) 00:30:57.257 slat (nsec): min=9484, max=72984, avg=27410.25, stdev=13510.90 00:30:57.257 clat (usec): min=1264, max=46725, avg=4400.61, stdev=1378.31 00:30:57.257 lat (usec): min=1295, max=46752, avg=4428.02, stdev=1378.92 00:30:57.257 clat percentiles (usec): 00:30:57.257 | 1.00th=[ 2802], 5.00th=[ 3359], 10.00th=[ 3654], 20.00th=[ 3982], 00:30:57.257 | 30.00th=[ 4228], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4555], 00:30:57.257 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 4948], 00:30:57.257 | 99.00th=[ 5932], 99.50th=[ 6194], 99.90th=[ 7504], 99.95th=[46924], 00:30:57.257 | 99.99th=[46924] 00:30:57.257 bw ( KiB/s): min=12544, max=15584, per=25.60%, avg=14227.56, stdev=932.57, samples=9 00:30:57.257 iops : min= 1568, max= 1948, avg=1778.44, stdev=116.57, samples=9 00:30:57.257 lat (msec) : 2=0.15%, 4=20.13%, 10=79.64%, 50=0.09% 00:30:57.257 cpu : usr=97.34%, sys=2.18%, ctx=19, majf=0, minf=9 00:30:57.257 IO depths : 1=0.4%, 2=11.5%, 4=60.1%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:57.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.257 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.257 issued rwts: total=8914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.257 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:57.257 filename0: (groupid=0, jobs=1): err= 0: pid=240370: Mon Jul 15 20:29:21 2024 00:30:57.257 read: IOPS=1741, BW=13.6MiB/s (14.3MB/s)(68.1MiB/5003msec) 00:30:57.257 slat (nsec): min=6769, max=81116, avg=25208.86, stdev=16293.21 00:30:57.257 clat (usec): min=1142, max=8042, avg=4505.25, stdev=580.49 00:30:57.257 lat (usec): min=1158, max=8090, avg=4530.46, stdev=581.29 00:30:57.257 clat percentiles (usec): 00:30:57.257 | 1.00th=[ 2900], 5.00th=[ 3654], 10.00th=[ 3916], 20.00th=[ 4228], 00:30:57.257 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4555], 00:30:57.257 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 4948], 95.00th=[ 5407], 00:30:57.257 | 99.00th=[ 6652], 99.50th=[ 6915], 99.90th=[ 7701], 99.95th=[ 7898], 00:30:57.257 | 99.99th=[ 8029] 00:30:57.257 bw ( KiB/s): min=13744, max=14288, per=25.10%, avg=13947.89, stdev=171.10, samples=9 00:30:57.257 iops : min= 1718, max= 1786, avg=1743.44, stdev=21.38, samples=9 00:30:57.257 lat (msec) : 2=0.21%, 4=11.21%, 10=88.58% 00:30:57.257 cpu : usr=96.72%, sys=2.78%, ctx=11, majf=0, minf=9 00:30:57.257 IO depths : 1=0.1%, 2=12.8%, 4=60.2%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:57.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.257 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.257 issued rwts: total=8712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.257 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:57.257 filename1: (groupid=0, jobs=1): err= 0: pid=240371: Mon Jul 15 20:29:21 2024 00:30:57.257 read: IOPS=1698, BW=13.3MiB/s (13.9MB/s)(66.4MiB/5002msec) 00:30:57.257 slat (nsec): min=6946, max=81174, avg=24551.20, stdev=16288.96 00:30:57.257 clat (usec): min=883, max=8253, avg=4626.12, stdev=652.28 00:30:57.257 lat (usec): min=896, max=8289, avg=4650.67, stdev=650.84 00:30:57.257 clat percentiles (usec): 00:30:57.257 | 1.00th=[ 3097], 
5.00th=[ 3949], 10.00th=[ 4146], 20.00th=[ 4359], 00:30:57.257 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4621], 00:30:57.257 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 5014], 95.00th=[ 6194], 00:30:57.257 | 99.00th=[ 7111], 99.50th=[ 7373], 99.90th=[ 7832], 99.95th=[ 7963], 00:30:57.257 | 99.99th=[ 8225] 00:30:57.257 bw ( KiB/s): min=13248, max=13888, per=24.45%, avg=13585.78, stdev=268.14, samples=9 00:30:57.257 iops : min= 1656, max= 1736, avg=1698.22, stdev=33.52, samples=9 00:30:57.257 lat (usec) : 1000=0.04% 00:30:57.257 lat (msec) : 2=0.29%, 4=5.37%, 10=94.30% 00:30:57.257 cpu : usr=97.42%, sys=2.18%, ctx=14, majf=0, minf=9 00:30:57.257 IO depths : 1=0.1%, 2=10.7%, 4=62.7%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:57.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.257 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.257 issued rwts: total=8496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.257 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:57.257 filename1: (groupid=0, jobs=1): err= 0: pid=240372: Mon Jul 15 20:29:21 2024 00:30:57.257 read: IOPS=1724, BW=13.5MiB/s (14.1MB/s)(67.4MiB/5002msec) 00:30:57.257 slat (nsec): min=9138, max=81186, avg=22173.45, stdev=14904.48 00:30:57.257 clat (usec): min=726, max=8188, avg=4569.52, stdev=647.83 00:30:57.257 lat (usec): min=738, max=8205, avg=4591.70, stdev=647.82 00:30:57.257 clat percentiles (usec): 00:30:57.257 | 1.00th=[ 3130], 5.00th=[ 3687], 10.00th=[ 3982], 20.00th=[ 4228], 00:30:57.257 | 30.00th=[ 4359], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4621], 00:30:57.257 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 4948], 95.00th=[ 5932], 00:30:57.257 | 99.00th=[ 6980], 99.50th=[ 7308], 99.90th=[ 7898], 99.95th=[ 8094], 00:30:57.257 | 99.99th=[ 8160] 00:30:57.257 bw ( KiB/s): min=13504, max=14208, per=24.78%, avg=13767.11, stdev=230.35, samples=9 00:30:57.257 iops : min= 1688, max= 1776, avg=1720.89, stdev=28.79, samples=9 00:30:57.257 lat (usec) : 750=0.02% 00:30:57.257 lat (msec) : 2=0.15%, 4=10.56%, 10=89.27% 00:30:57.257 cpu : usr=97.08%, sys=2.48%, ctx=8, majf=0, minf=9 00:30:57.257 IO depths : 1=0.4%, 2=6.8%, 4=66.2%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:57.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.257 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:57.257 issued rwts: total=8628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:57.257 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:57.257 00:30:57.257 Run status group 0 (all jobs): 00:30:57.257 READ: bw=54.3MiB/s (56.9MB/s), 13.3MiB/s-13.9MiB/s (13.9MB/s-14.6MB/s), io=271MiB (285MB), run=5002-5003msec 00:30:57.257 20:29:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:57.257 20:29:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:57.257 20:29:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:57.257 20:29:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:57.257 20:29:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:57.257 20:29:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:57.257 20:29:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.257 20:29:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:30:57.257 20:29:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.258 20:29:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:57.258 20:29:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.258 20:29:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.258 20:29:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.258 20:29:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:57.258 20:29:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:57.258 20:29:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:57.258 20:29:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:57.258 20:29:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.258 20:29:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.258 20:29:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.258 20:29:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:57.258 20:29:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.258 20:29:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.258 20:29:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.258 00:30:57.258 real 0m24.500s 00:30:57.258 user 5m7.486s 00:30:57.258 sys 0m4.646s 00:30:57.258 20:29:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:57.258 20:29:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:57.258 ************************************ 00:30:57.258 END TEST fio_dif_rand_params 00:30:57.258 ************************************ 00:30:57.258 20:29:22 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:57.258 20:29:22 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:57.258 20:29:22 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:57.258 20:29:22 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:57.258 20:29:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:57.258 ************************************ 00:30:57.258 START TEST fio_dif_digest 00:30:57.258 ************************************ 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:57.258 20:29:22 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:57.258 bdev_null0 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:57.258 [2024-07-15 20:29:22.282228] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:57.258 { 00:30:57.258 "params": { 
00:30:57.258 "name": "Nvme$subsystem", 00:30:57.258 "trtype": "$TEST_TRANSPORT", 00:30:57.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:57.258 "adrfam": "ipv4", 00:30:57.258 "trsvcid": "$NVMF_PORT", 00:30:57.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:57.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:57.258 "hdgst": ${hdgst:-false}, 00:30:57.258 "ddgst": ${ddgst:-false} 00:30:57.258 }, 00:30:57.258 "method": "bdev_nvme_attach_controller" 00:30:57.258 } 00:30:57.258 EOF 00:30:57.258 )") 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:57.258 "params": { 00:30:57.258 "name": "Nvme0", 00:30:57.258 "trtype": "tcp", 00:30:57.258 "traddr": "10.0.0.2", 00:30:57.258 "adrfam": "ipv4", 00:30:57.258 "trsvcid": "4420", 00:30:57.258 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:57.258 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:57.258 "hdgst": true, 00:30:57.258 "ddgst": true 00:30:57.258 }, 00:30:57.258 "method": "bdev_nvme_attach_controller" 00:30:57.258 }' 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:57.258 20:29:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:57.517 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:57.517 ... 
00:30:57.517 fio-3.35 00:30:57.517 Starting 3 threads 00:30:57.517 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.749 00:31:09.749 filename0: (groupid=0, jobs=1): err= 0: pid=241582: Mon Jul 15 20:29:33 2024 00:31:09.749 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(245MiB/10048msec) 00:31:09.749 slat (nsec): min=9829, max=70245, avg=29133.57, stdev=3013.74 00:31:09.749 clat (usec): min=9419, max=57794, avg=15307.08, stdev=3317.62 00:31:09.749 lat (usec): min=9448, max=57822, avg=15336.22, stdev=3317.54 00:31:09.749 clat percentiles (usec): 00:31:09.749 | 1.00th=[10028], 5.00th=[12911], 10.00th=[13829], 20.00th=[14353], 00:31:09.749 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15270], 60.00th=[15401], 00:31:09.749 | 70.00th=[15795], 80.00th=[16057], 90.00th=[16450], 95.00th=[16909], 00:31:09.749 | 99.00th=[18744], 99.50th=[54264], 99.90th=[57410], 99.95th=[57934], 00:31:09.749 | 99.99th=[57934] 00:31:09.749 bw ( KiB/s): min=22528, max=26624, per=35.20%, avg=25088.00, stdev=1030.71, samples=20 00:31:09.749 iops : min= 176, max= 208, avg=196.00, stdev= 8.05, samples=20 00:31:09.749 lat (msec) : 10=0.87%, 20=98.52%, 50=0.10%, 100=0.51% 00:31:09.749 cpu : usr=93.20%, sys=6.23%, ctx=17, majf=0, minf=132 00:31:09.749 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:09.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.749 issued rwts: total=1962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:09.749 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:09.749 filename0: (groupid=0, jobs=1): err= 0: pid=241583: Mon Jul 15 20:29:33 2024 00:31:09.749 read: IOPS=179, BW=22.5MiB/s (23.6MB/s)(226MiB/10047msec) 00:31:09.749 slat (nsec): min=9729, max=33339, avg=16484.70, stdev=3108.52 00:31:09.749 clat (usec): min=9020, max=58884, avg=16636.37, stdev=4733.34 00:31:09.749 lat (usec): min=9034, max=58904, avg=16652.86, stdev=4733.35 00:31:09.749 clat percentiles (usec): 00:31:09.749 | 1.00th=[10945], 5.00th=[13960], 10.00th=[14615], 20.00th=[15270], 00:31:09.749 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16188], 60.00th=[16450], 00:31:09.749 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17957], 95.00th=[18482], 00:31:09.749 | 99.00th=[55837], 99.50th=[56361], 99.90th=[58459], 99.95th=[58983], 00:31:09.749 | 99.99th=[58983] 00:31:09.749 bw ( KiB/s): min=21248, max=25088, per=32.40%, avg=23091.20, stdev=1065.60, samples=20 00:31:09.749 iops : min= 166, max= 196, avg=180.40, stdev= 8.32, samples=20 00:31:09.749 lat (msec) : 10=0.39%, 20=98.23%, 50=0.11%, 100=1.27% 00:31:09.749 cpu : usr=95.58%, sys=4.06%, ctx=22, majf=0, minf=48 00:31:09.749 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:09.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.749 issued rwts: total=1807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:09.749 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:09.749 filename0: (groupid=0, jobs=1): err= 0: pid=241584: Mon Jul 15 20:29:33 2024 00:31:09.749 read: IOPS=181, BW=22.7MiB/s (23.8MB/s)(228MiB/10048msec) 00:31:09.749 slat (nsec): min=9688, max=35665, avg=16858.91, stdev=3025.80 00:31:09.749 clat (usec): min=9624, max=57414, avg=16464.34, stdev=2558.30 00:31:09.749 lat (usec): min=9635, max=57433, avg=16481.20, stdev=2558.35 00:31:09.749 clat percentiles (usec): 
00:31:09.749 | 1.00th=[10683], 5.00th=[12911], 10.00th=[14877], 20.00th=[15664], 00:31:09.749 | 30.00th=[15926], 40.00th=[16188], 50.00th=[16450], 60.00th=[16909], 00:31:09.749 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17957], 95.00th=[18744], 00:31:09.749 | 99.00th=[19792], 99.50th=[20841], 99.90th=[57410], 99.95th=[57410], 00:31:09.749 | 99.99th=[57410] 00:31:09.749 bw ( KiB/s): min=21803, max=24576, per=32.74%, avg=23336.55, stdev=696.17, samples=20 00:31:09.749 iops : min= 170, max= 192, avg=182.30, stdev= 5.48, samples=20 00:31:09.749 lat (msec) : 10=0.16%, 20=99.01%, 50=0.60%, 100=0.22% 00:31:09.749 cpu : usr=95.41%, sys=4.23%, ctx=20, majf=0, minf=148 00:31:09.749 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:09.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.749 issued rwts: total=1826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:09.749 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:09.749 00:31:09.749 Run status group 0 (all jobs): 00:31:09.749 READ: bw=69.6MiB/s (73.0MB/s), 22.5MiB/s-24.4MiB/s (23.6MB/s-25.6MB/s), io=699MiB (733MB), run=10047-10048msec 00:31:09.749 20:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:09.749 20:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:09.749 20:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:09.749 20:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:09.749 20:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:09.749 20:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:09.749 20:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.749 20:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:09.749 20:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.749 20:29:33 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:09.749 20:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.749 20:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:09.749 20:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.749 00:31:09.749 real 0m11.234s 00:31:09.749 user 0m40.715s 00:31:09.749 sys 0m1.796s 00:31:09.749 20:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:09.749 20:29:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:09.749 ************************************ 00:31:09.749 END TEST fio_dif_digest 00:31:09.749 ************************************ 00:31:09.749 20:29:33 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:09.749 20:29:33 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:09.749 20:29:33 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:09.749 20:29:33 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:09.749 20:29:33 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:09.749 20:29:33 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:09.749 20:29:33 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:09.749 20:29:33 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:09.749 20:29:33 nvmf_dif -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:09.749 rmmod nvme_tcp 00:31:09.749 rmmod nvme_fabrics 00:31:09.749 rmmod nvme_keyring 00:31:09.749 20:29:33 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:09.749 20:29:33 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:09.749 20:29:33 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:09.749 20:29:33 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 232153 ']' 00:31:09.749 20:29:33 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 232153 00:31:09.749 20:29:33 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 232153 ']' 00:31:09.749 20:29:33 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 232153 00:31:09.749 20:29:33 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:31:09.750 20:29:33 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:09.750 20:29:33 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 232153 00:31:09.750 20:29:33 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:09.750 20:29:33 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:09.750 20:29:33 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 232153' 00:31:09.750 killing process with pid 232153 00:31:09.750 20:29:33 nvmf_dif -- common/autotest_common.sh@967 -- # kill 232153 00:31:09.750 20:29:33 nvmf_dif -- common/autotest_common.sh@972 -- # wait 232153 00:31:09.750 20:29:33 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:09.750 20:29:33 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:11.128 Waiting for block devices as requested 00:31:11.387 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:31:11.387 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:11.647 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:11.647 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:11.647 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:11.647 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:11.906 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:11.906 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:11.906 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:12.165 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:12.165 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:12.165 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:12.424 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:12.424 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:12.424 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:12.424 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:12.683 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:12.683 20:29:37 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:12.683 20:29:37 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:12.683 20:29:37 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:12.683 20:29:37 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:12.683 20:29:37 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.683 20:29:37 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:12.683 20:29:37 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.645 20:29:39 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:14.645 00:31:14.645 real 1m14.533s 00:31:14.645 user 7m41.283s 00:31:14.645 sys 0m19.034s 00:31:14.645 20:29:39 nvmf_dif -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:31:14.645 20:29:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:14.645 ************************************ 00:31:14.645 END TEST nvmf_dif 00:31:14.645 ************************************ 00:31:14.902 20:29:40 -- common/autotest_common.sh@1142 -- # return 0 00:31:14.902 20:29:40 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:14.902 20:29:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:14.902 20:29:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:14.902 20:29:40 -- common/autotest_common.sh@10 -- # set +x 00:31:14.902 ************************************ 00:31:14.902 START TEST nvmf_abort_qd_sizes 00:31:14.902 ************************************ 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:14.902 * Looking for test storage... 00:31:14.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.902 20:29:40 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:14.902 20:29:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:20.166 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:20.166 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:20.166 Found net devices under 0000:af:00.0: cvl_0_0 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:20.166 Found net devices under 0000:af:00.1: cvl_0_1 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:20.166 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:20.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:20.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:31:20.167 00:31:20.167 --- 10.0.0.2 ping statistics --- 00:31:20.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.167 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:20.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:31:20.167 00:31:20.167 --- 10.0.0.1 ping statistics --- 00:31:20.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.167 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:20.167 20:29:45 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:22.695 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:22.695 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:22.954 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:22.954 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:22.954 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:22.954 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:22.954 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:22.954 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:22.954 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:22.954 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:22.954 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:22.954 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:22.954 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:22.954 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:22.954 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:22.954 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:23.889 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:31:23.889 20:29:49 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:23.889 20:29:49 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:23.889 20:29:49 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:23.889 20:29:49 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:23.889 20:29:49 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:23.889 20:29:49 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:23.889 20:29:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:23.889 20:29:49 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:23.889 20:29:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:23.889 20:29:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:23.889 20:29:49 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=249672 00:31:23.889 20:29:49 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 249672 00:31:23.889 20:29:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 249672 ']' 00:31:23.889 20:29:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.889 20:29:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:23.889 20:29:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:23.889 20:29:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:23.889 20:29:49 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:23.889 20:29:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:24.148 [2024-07-15 20:29:49.272969] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:31:24.148 [2024-07-15 20:29:49.273026] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:24.148 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.148 [2024-07-15 20:29:49.359903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:24.148 [2024-07-15 20:29:49.452351] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:24.148 [2024-07-15 20:29:49.452394] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:24.148 [2024-07-15 20:29:49.452405] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:24.149 [2024-07-15 20:29:49.452413] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:24.149 [2024-07-15 20:29:49.452421] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:24.149 [2024-07-15 20:29:49.452462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:24.149 [2024-07-15 20:29:49.452564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:24.149 [2024-07-15 20:29:49.452659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:24.149 [2024-07-15 20:29:49.452662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:86:00.0 ]] 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- scripts/common.sh@319 
-- # [[ -e /sys/bus/pci/drivers/nvme/0000:86:00.0 ]] 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:86:00.0 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:86:00.0 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:25.085 20:29:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:25.085 ************************************ 00:31:25.085 START TEST spdk_target_abort 00:31:25.085 ************************************ 00:31:25.085 20:29:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:31:25.085 20:29:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:25.085 20:29:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:86:00.0 -b spdk_target 00:31:25.085 20:29:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.085 20:29:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:28.371 spdk_targetn1 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:28.371 [2024-07-15 20:29:53.166914] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:28.371 [2024-07-15 20:29:53.199780] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:28.371 20:29:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:28.371 EAL: No free 2048 kB hugepages reported on node 1 00:31:31.658 Initializing NVMe Controllers 00:31:31.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:31.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:31.658 Initialization complete. Launching workers. 00:31:31.658 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14455, failed: 0 00:31:31.658 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1604, failed to submit 12851 00:31:31.658 success 744, unsuccess 860, failed 0 00:31:31.658 20:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:31.658 20:29:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:31.658 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.943 Initializing NVMe Controllers 00:31:34.943 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:34.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:34.943 Initialization complete. Launching workers. 00:31:34.943 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8506, failed: 0 00:31:34.943 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1241, failed to submit 7265 00:31:34.943 success 309, unsuccess 932, failed 0 00:31:34.943 20:29:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:34.943 20:29:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:34.943 EAL: No free 2048 kB hugepages reported on node 1 00:31:38.230 Initializing NVMe Controllers 00:31:38.230 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:38.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:38.230 Initialization complete. Launching workers. 
00:31:38.230 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38648, failed: 0 00:31:38.230 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2592, failed to submit 36056 00:31:38.230 success 588, unsuccess 2004, failed 0 00:31:38.230 20:30:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:38.230 20:30:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.230 20:30:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:38.230 20:30:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.230 20:30:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:38.230 20:30:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.230 20:30:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:39.163 20:30:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.163 20:30:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 249672 00:31:39.163 20:30:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 249672 ']' 00:31:39.163 20:30:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 249672 00:31:39.163 20:30:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:31:39.163 20:30:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:39.163 20:30:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 249672 00:31:39.163 20:30:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:39.163 20:30:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:39.163 20:30:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 249672' 00:31:39.163 killing process with pid 249672 00:31:39.163 20:30:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 249672 00:31:39.163 20:30:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 249672 00:31:39.424 00:31:39.424 real 0m14.273s 00:31:39.424 user 0m57.484s 00:31:39.424 sys 0m2.092s 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:39.424 ************************************ 00:31:39.424 END TEST spdk_target_abort 00:31:39.424 ************************************ 00:31:39.424 20:30:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:31:39.424 20:30:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:39.424 20:30:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:39.424 20:30:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:39.424 20:30:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:39.424 
************************************ 00:31:39.424 START TEST kernel_target_abort 00:31:39.424 ************************************ 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:39.424 20:30:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:42.054 Waiting for block devices as requested 00:31:42.054 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:31:42.313 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:42.313 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:42.313 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:42.313 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:42.573 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:42.573 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:42.573 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:42.832 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:42.832 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:42.832 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:42.832 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:43.091 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:43.091 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:43.091 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:43.350 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:43.350 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:43.350 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:43.350 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:43.350 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:43.350 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:43.350 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:43.350 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:43.350 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:43.350 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:43.350 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:43.350 No valid GPT data, bailing 00:31:43.609 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:43.610 20:30:08 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:31:43.610 00:31:43.610 Discovery Log Number of Records 2, Generation counter 2 00:31:43.610 =====Discovery Log Entry 0====== 00:31:43.610 trtype: tcp 00:31:43.610 adrfam: ipv4 00:31:43.610 subtype: current discovery subsystem 00:31:43.610 treq: not specified, sq flow control disable supported 00:31:43.610 portid: 1 00:31:43.610 trsvcid: 4420 00:31:43.610 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:43.610 traddr: 10.0.0.1 00:31:43.610 eflags: none 00:31:43.610 sectype: none 00:31:43.610 =====Discovery Log Entry 1====== 00:31:43.610 trtype: tcp 00:31:43.610 adrfam: ipv4 00:31:43.610 subtype: nvme subsystem 00:31:43.610 treq: not specified, sq flow control disable supported 00:31:43.610 portid: 1 00:31:43.610 trsvcid: 4420 00:31:43.610 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:43.610 traddr: 10.0.0.1 00:31:43.610 eflags: none 00:31:43.610 sectype: none 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:43.610 20:30:08 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:43.610 20:30:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:43.610 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.897 Initializing NVMe Controllers 00:31:46.897 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:46.897 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:46.897 Initialization complete. Launching workers. 00:31:46.897 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 51691, failed: 0 00:31:46.897 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 51691, failed to submit 0 00:31:46.897 success 0, unsuccess 51691, failed 0 00:31:46.897 20:30:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:46.897 20:30:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:46.897 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.183 Initializing NVMe Controllers 00:31:50.184 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:50.184 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:50.184 Initialization complete. Launching workers. 
00:31:50.184 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85737, failed: 0 00:31:50.184 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21618, failed to submit 64119 00:31:50.184 success 0, unsuccess 21618, failed 0 00:31:50.184 20:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:50.184 20:30:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:50.184 EAL: No free 2048 kB hugepages reported on node 1 00:31:53.467 Initializing NVMe Controllers 00:31:53.467 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:53.467 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:53.467 Initialization complete. Launching workers. 00:31:53.467 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82436, failed: 0 00:31:53.467 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20586, failed to submit 61850 00:31:53.467 success 0, unsuccess 20586, failed 0 00:31:53.467 20:30:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:53.467 20:30:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:53.467 20:30:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:53.467 20:30:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:53.467 20:30:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:53.467 20:30:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:53.467 20:30:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:53.467 20:30:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:53.467 20:30:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:53.467 20:30:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:56.008 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:56.008 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:56.008 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:56.008 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:56.008 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:56.008 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:56.008 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:56.008 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:56.008 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:56.008 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:56.008 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:56.008 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:56.008 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:56.008 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:31:56.008 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:56.008 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:56.956 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:31:56.956 00:31:56.956 real 0m17.433s 00:31:56.956 user 0m8.549s 00:31:56.956 sys 0m4.963s 00:31:56.956 20:30:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:56.956 20:30:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:56.956 ************************************ 00:31:56.956 END TEST kernel_target_abort 00:31:56.956 ************************************ 00:31:56.956 20:30:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:31:56.956 20:30:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:56.956 20:30:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:56.956 20:30:22 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:56.956 20:30:22 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:31:56.956 20:30:22 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:56.956 20:30:22 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:31:56.956 20:30:22 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:56.956 20:30:22 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:56.956 rmmod nvme_tcp 00:31:56.956 rmmod nvme_fabrics 00:31:56.956 rmmod nvme_keyring 00:31:56.956 20:30:22 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:56.956 20:30:22 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:31:56.956 20:30:22 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:31:56.956 20:30:22 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 249672 ']' 00:31:56.956 20:30:22 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 249672 00:31:56.956 20:30:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 249672 ']' 00:31:56.956 20:30:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 249672 00:31:56.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (249672) - No such process 00:31:56.956 20:30:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 249672 is not found' 00:31:56.956 Process with pid 249672 is not found 00:31:56.956 20:30:22 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:56.956 20:30:22 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:59.491 Waiting for block devices as requested 00:31:59.750 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:31:59.750 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:59.750 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:00.008 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:00.008 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:00.008 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:00.266 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:00.266 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:00.266 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:00.266 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:00.525 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:00.525 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:00.525 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:00.525 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 
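The clean_kernel_target sequence logged above tears the in-kernel nvmet target down through configfs before control returns to the SPDK side; a rough sketch of those steps (the enable-file path written by the bare 'echo 0' is an assumption, the remaining commands match the log):

    nqn=nqn.2016-06.io.spdk:testnqn
    echo 0 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable   # assumed target of the 'echo 0' step
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$nqn                  # unlink the subsystem from the port
    rmdir /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/$nqn
    modprobe -r nvmet_tcp nvmet                                             # unload the kernel target modules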
00:32:00.784 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:00.784 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:00.784 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:01.044 20:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:01.044 20:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:01.044 20:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:01.044 20:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:01.044 20:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.044 20:30:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:01.044 20:30:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.951 20:30:28 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:02.951 00:32:02.951 real 0m48.164s 00:32:02.951 user 1m10.132s 00:32:02.951 sys 0m15.166s 00:32:02.951 20:30:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:02.951 20:30:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:02.951 ************************************ 00:32:02.951 END TEST nvmf_abort_qd_sizes 00:32:02.951 ************************************ 00:32:02.951 20:30:28 -- common/autotest_common.sh@1142 -- # return 0 00:32:02.951 20:30:28 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:02.951 20:30:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:02.951 20:30:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:02.951 20:30:28 -- common/autotest_common.sh@10 -- # set +x 00:32:02.951 ************************************ 00:32:02.951 START TEST keyring_file 00:32:02.951 ************************************ 00:32:02.951 20:30:28 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:03.210 * Looking for test storage... 
00:32:03.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:03.210 20:30:28 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:03.210 20:30:28 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:03.210 20:30:28 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:03.210 20:30:28 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:03.210 20:30:28 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:03.210 20:30:28 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:03.210 20:30:28 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:03.210 20:30:28 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:03.210 20:30:28 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:03.210 20:30:28 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:03.210 20:30:28 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:03.210 20:30:28 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:03.210 20:30:28 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:03.210 20:30:28 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:03.210 20:30:28 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:03.210 20:30:28 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:03.210 20:30:28 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:03.210 20:30:28 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:03.210 20:30:28 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:03.210 20:30:28 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:03.210 20:30:28 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:03.210 20:30:28 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:03.210 20:30:28 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:03.210 20:30:28 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.210 20:30:28 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.211 20:30:28 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.211 20:30:28 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:03.211 20:30:28 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:03.211 20:30:28 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:03.211 20:30:28 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:03.211 20:30:28 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:03.211 20:30:28 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:03.211 20:30:28 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:03.211 20:30:28 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:03.211 20:30:28 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:03.211 20:30:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:03.211 20:30:28 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:03.211 20:30:28 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:03.211 20:30:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:03.211 20:30:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:03.211 20:30:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TeF3lGRsxk 00:32:03.211 20:30:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:03.211 20:30:28 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TeF3lGRsxk 00:32:03.211 20:30:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TeF3lGRsxk 00:32:03.211 20:30:28 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.TeF3lGRsxk 00:32:03.211 20:30:28 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:03.211 20:30:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:03.211 20:30:28 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:03.211 20:30:28 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:03.211 20:30:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:03.211 20:30:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:03.211 20:30:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yOhAV5oOwP 00:32:03.211 20:30:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:03.211 20:30:28 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:03.211 20:30:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yOhAV5oOwP 00:32:03.211 20:30:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yOhAV5oOwP 00:32:03.211 20:30:28 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.yOhAV5oOwP 00:32:03.211 20:30:28 keyring_file -- keyring/file.sh@30 -- # tgtpid=259505 00:32:03.211 20:30:28 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:03.211 20:30:28 keyring_file -- keyring/file.sh@32 -- # waitforlisten 259505 00:32:03.211 20:30:28 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 259505 ']' 00:32:03.211 20:30:28 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:03.211 20:30:28 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:03.211 20:30:28 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:03.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:03.211 20:30:28 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:03.211 20:30:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:03.470 [2024-07-15 20:30:28.572350] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
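The prep_key/format_interchange_psk calls above turn each raw hex key into the NVMe/TCP PSK interchange format before it is written to a chmod-0600 temp file. A hedged approximation of what that embedded python one-liner produces (prefix 'NVMeTLSkey-1', base64 of the key with a trailing CRC-32; whether the hex string is decoded to raw bytes first is an assumption, and the real nvmf/common.sh helper may differ in detail):

    key_hex=00112233445566778899aabbccddeeff
    # Hash indicator 00 means the configured PSK is used as-is (01/02 would request SHA-256/384 derivation).
    python3 -c 'import sys, base64, zlib, struct; k = bytes.fromhex(sys.argv[1]); print("NVMeTLSkey-1:00:" + base64.b64encode(k + struct.pack("<I", zlib.crc32(k) & 0xffffffff)).decode() + ":")' "$key_hex"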
00:32:03.470 [2024-07-15 20:30:28.572411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid259505 ] 00:32:03.470 EAL: No free 2048 kB hugepages reported on node 1 00:32:03.470 [2024-07-15 20:30:28.653374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.470 [2024-07-15 20:30:28.743571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:04.408 20:30:29 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:04.408 20:30:29 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:04.408 20:30:29 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:04.408 20:30:29 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.408 20:30:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:04.408 [2024-07-15 20:30:29.507109] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:04.408 null0 00:32:04.408 [2024-07-15 20:30:29.539152] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:04.408 [2024-07-15 20:30:29.539442] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:04.408 [2024-07-15 20:30:29.547157] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:04.408 20:30:29 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.408 20:30:29 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:04.408 20:30:29 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:04.408 20:30:29 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:04.408 20:30:29 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:04.408 20:30:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:04.408 20:30:29 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:04.408 20:30:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:04.408 20:30:29 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:04.408 20:30:29 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.408 20:30:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:04.409 [2024-07-15 20:30:29.559197] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:04.409 request: 00:32:04.409 { 00:32:04.409 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:04.409 "secure_channel": false, 00:32:04.409 "listen_address": { 00:32:04.409 "trtype": "tcp", 00:32:04.409 "traddr": "127.0.0.1", 00:32:04.409 "trsvcid": "4420" 00:32:04.409 }, 00:32:04.409 "method": "nvmf_subsystem_add_listener", 00:32:04.409 "req_id": 1 00:32:04.409 } 00:32:04.409 Got JSON-RPC error response 00:32:04.409 response: 00:32:04.409 { 00:32:04.409 "code": -32602, 00:32:04.409 "message": "Invalid parameters" 00:32:04.409 } 00:32:04.409 20:30:29 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:04.409 20:30:29 keyring_file -- common/autotest_common.sh@651 -- # es=1 
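The es bookkeeping around this point comes from the expected-failure wrapper used throughout the keyring tests: NOT runs the wrapped command and passes only when that command fails. A simplified version (the real autotest_common.sh helper additionally checks for exit codes above 128 and an optional expected error message, as the es checks in the log show):

    NOT() {
        # Invert the wrapped command's exit status: failure here counts as a pass.
        if "$@"; then
            return 1    # unexpected success
        fi
        return 0        # failed as expected
    }

    # e.g. the duplicate-listener case above: the second add_listener must be rejected.
    NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0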
00:32:04.409 20:30:29 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:04.409 20:30:29 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:04.409 20:30:29 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:04.409 20:30:29 keyring_file -- keyring/file.sh@46 -- # bperfpid=259608 00:32:04.409 20:30:29 keyring_file -- keyring/file.sh@48 -- # waitforlisten 259608 /var/tmp/bperf.sock 00:32:04.409 20:30:29 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:04.409 20:30:29 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 259608 ']' 00:32:04.409 20:30:29 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:04.409 20:30:29 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:04.409 20:30:29 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:04.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:04.409 20:30:29 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:04.409 20:30:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:04.409 [2024-07-15 20:30:29.615806] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 00:32:04.409 [2024-07-15 20:30:29.615869] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid259608 ] 00:32:04.409 EAL: No free 2048 kB hugepages reported on node 1 00:32:04.409 [2024-07-15 20:30:29.686000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.668 [2024-07-15 20:30:29.773862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.668 20:30:29 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:04.668 20:30:29 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:04.668 20:30:29 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TeF3lGRsxk 00:32:04.668 20:30:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TeF3lGRsxk 00:32:04.927 20:30:30 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yOhAV5oOwP 00:32:04.927 20:30:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yOhAV5oOwP 00:32:05.186 20:30:30 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:05.186 20:30:30 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:05.186 20:30:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:05.186 20:30:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:05.186 20:30:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.445 20:30:30 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.TeF3lGRsxk == \/\t\m\p\/\t\m\p\.\T\e\F\3\l\G\R\s\x\k ]] 00:32:05.445 20:30:30 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:32:05.445 20:30:30 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:05.445 20:30:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:05.445 20:30:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:05.445 20:30:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.704 20:30:30 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.yOhAV5oOwP == \/\t\m\p\/\t\m\p\.\y\O\h\A\V\5\o\O\w\P ]] 00:32:05.704 20:30:30 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:05.704 20:30:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:05.704 20:30:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:05.704 20:30:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:05.704 20:30:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.704 20:30:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:05.963 20:30:31 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:05.963 20:30:31 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:05.963 20:30:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:05.963 20:30:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:05.963 20:30:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:05.963 20:30:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:05.963 20:30:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:06.222 20:30:31 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:06.222 20:30:31 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:06.222 20:30:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:06.480 [2024-07-15 20:30:31.589512] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:06.480 nvme0n1 00:32:06.480 20:30:31 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:06.480 20:30:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:06.480 20:30:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:06.480 20:30:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:06.480 20:30:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:06.480 20:30:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:07.049 20:30:32 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:07.049 20:30:32 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:07.049 20:30:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:07.049 20:30:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:07.049 20:30:32 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:32:07.049 20:30:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:07.049 20:30:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:07.308 20:30:32 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:07.308 20:30:32 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:07.308 Running I/O for 1 seconds... 00:32:08.245 00:32:08.245 Latency(us) 00:32:08.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.245 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:08.245 nvme0n1 : 1.01 10202.78 39.85 0.00 0.00 12498.68 4051.32 18707.55 00:32:08.245 =================================================================================================================== 00:32:08.246 Total : 10202.78 39.85 0.00 0.00 12498.68 4051.32 18707.55 00:32:08.246 0 00:32:08.246 20:30:33 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:08.246 20:30:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:08.504 20:30:33 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:08.504 20:30:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:08.504 20:30:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:08.504 20:30:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:08.504 20:30:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:08.504 20:30:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:08.763 20:30:34 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:08.763 20:30:34 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:08.763 20:30:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:08.763 20:30:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:08.763 20:30:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:08.763 20:30:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:08.763 20:30:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:09.022 20:30:34 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:09.022 20:30:34 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:09.022 20:30:34 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:09.022 20:30:34 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:09.022 20:30:34 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:09.022 20:30:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:09.022 20:30:34 keyring_file -- common/autotest_common.sh@640 -- # type 
-t bperf_cmd 00:32:09.022 20:30:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:09.022 20:30:34 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:09.022 20:30:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:09.281 [2024-07-15 20:30:34.527842] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:09.281 [2024-07-15 20:30:34.528472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227e2e0 (107): Transport endpoint is not connected 00:32:09.281 [2024-07-15 20:30:34.529463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227e2e0 (9): Bad file descriptor 00:32:09.281 [2024-07-15 20:30:34.530463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:09.281 [2024-07-15 20:30:34.530477] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:09.281 [2024-07-15 20:30:34.530488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:09.281 request: 00:32:09.281 { 00:32:09.281 "name": "nvme0", 00:32:09.281 "trtype": "tcp", 00:32:09.281 "traddr": "127.0.0.1", 00:32:09.281 "adrfam": "ipv4", 00:32:09.281 "trsvcid": "4420", 00:32:09.281 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:09.281 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:09.281 "prchk_reftag": false, 00:32:09.281 "prchk_guard": false, 00:32:09.281 "hdgst": false, 00:32:09.281 "ddgst": false, 00:32:09.281 "psk": "key1", 00:32:09.281 "method": "bdev_nvme_attach_controller", 00:32:09.281 "req_id": 1 00:32:09.281 } 00:32:09.281 Got JSON-RPC error response 00:32:09.281 response: 00:32:09.281 { 00:32:09.281 "code": -5, 00:32:09.281 "message": "Input/output error" 00:32:09.281 } 00:32:09.281 20:30:34 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:09.281 20:30:34 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:09.281 20:30:34 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:09.281 20:30:34 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:09.281 20:30:34 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:09.281 20:30:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:09.281 20:30:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:09.281 20:30:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:09.281 20:30:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:09.281 20:30:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:09.541 20:30:34 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:09.541 20:30:34 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:09.541 20:30:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:09.541 20:30:34 keyring_file -- keyring/common.sh@12 -- # jq 
-r .refcnt 00:32:09.541 20:30:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:09.541 20:30:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:09.541 20:30:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:09.799 20:30:35 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:09.799 20:30:35 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:09.799 20:30:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:10.058 20:30:35 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:10.058 20:30:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:10.316 20:30:35 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:10.316 20:30:35 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:10.316 20:30:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:10.575 20:30:35 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:10.575 20:30:35 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.TeF3lGRsxk 00:32:10.575 20:30:35 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.TeF3lGRsxk 00:32:10.575 20:30:35 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:10.575 20:30:35 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.TeF3lGRsxk 00:32:10.575 20:30:35 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:10.575 20:30:35 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:10.575 20:30:35 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:10.575 20:30:35 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:10.575 20:30:35 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TeF3lGRsxk 00:32:10.575 20:30:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TeF3lGRsxk 00:32:10.832 [2024-07-15 20:30:35.952338] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.TeF3lGRsxk': 0100660 00:32:10.832 [2024-07-15 20:30:35.952367] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:10.832 request: 00:32:10.832 { 00:32:10.832 "name": "key0", 00:32:10.832 "path": "/tmp/tmp.TeF3lGRsxk", 00:32:10.832 "method": "keyring_file_add_key", 00:32:10.832 "req_id": 1 00:32:10.832 } 00:32:10.832 Got JSON-RPC error response 00:32:10.832 response: 00:32:10.832 { 00:32:10.832 "code": -1, 00:32:10.832 "message": "Operation not permitted" 00:32:10.832 } 00:32:10.832 20:30:35 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:10.832 20:30:35 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:10.832 20:30:35 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:10.832 20:30:35 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
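The error just logged is intentional: keyring_file_add_key rejects a key file that is group- or world-accessible, so the test loosens the mode to 0660, expects the RPC to fail, and then (immediately below) restores 0600 and re-adds the key. Schematically, with paths and socket taken from this log:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    chmod 0660 /tmp/tmp.TeF3lGRsxk
    $rpc keyring_file_add_key key0 /tmp/tmp.TeF3lGRsxk && echo "unexpected success"   # rejected: mode 0100660
    chmod 0600 /tmp/tmp.TeF3lGRsxk
    $rpc keyring_file_add_key key0 /tmp/tmp.TeF3lGRsxk                                # accepted again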
00:32:10.832 20:30:35 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.TeF3lGRsxk 00:32:10.832 20:30:35 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TeF3lGRsxk 00:32:10.832 20:30:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TeF3lGRsxk 00:32:11.091 20:30:36 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.TeF3lGRsxk 00:32:11.091 20:30:36 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:11.091 20:30:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:11.091 20:30:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:11.091 20:30:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:11.091 20:30:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:11.091 20:30:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:11.350 20:30:36 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:11.350 20:30:36 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:11.350 20:30:36 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:11.350 20:30:36 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:11.350 20:30:36 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:11.350 20:30:36 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:11.350 20:30:36 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:11.350 20:30:36 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:11.350 20:30:36 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:11.350 20:30:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:11.350 [2024-07-15 20:30:36.690334] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.TeF3lGRsxk': No such file or directory 00:32:11.350 [2024-07-15 20:30:36.690357] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:11.350 [2024-07-15 20:30:36.690387] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:11.350 [2024-07-15 20:30:36.690396] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:11.350 [2024-07-15 20:30:36.690404] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:11.350 request: 00:32:11.350 { 00:32:11.350 "name": "nvme0", 00:32:11.350 "trtype": "tcp", 00:32:11.350 "traddr": "127.0.0.1", 00:32:11.350 "adrfam": "ipv4", 00:32:11.350 "trsvcid": "4420", 00:32:11.350 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:32:11.350 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:11.350 "prchk_reftag": false, 00:32:11.350 "prchk_guard": false, 00:32:11.350 "hdgst": false, 00:32:11.350 "ddgst": false, 00:32:11.350 "psk": "key0", 00:32:11.350 "method": "bdev_nvme_attach_controller", 00:32:11.350 "req_id": 1 00:32:11.350 } 00:32:11.350 Got JSON-RPC error response 00:32:11.350 response: 00:32:11.350 { 00:32:11.350 "code": -19, 00:32:11.350 "message": "No such device" 00:32:11.350 } 00:32:11.610 20:30:36 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:11.610 20:30:36 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:11.610 20:30:36 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:11.610 20:30:36 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:11.610 20:30:36 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:11.610 20:30:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:11.610 20:30:36 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:11.610 20:30:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:11.610 20:30:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:11.610 20:30:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:11.610 20:30:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:11.610 20:30:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:11.610 20:30:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rFR5jbp31O 00:32:11.610 20:30:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:11.869 20:30:36 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:11.869 20:30:36 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:11.869 20:30:36 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:11.869 20:30:36 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:11.869 20:30:36 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:11.869 20:30:36 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:11.869 20:30:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rFR5jbp31O 00:32:11.869 20:30:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rFR5jbp31O 00:32:11.869 20:30:37 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.rFR5jbp31O 00:32:11.869 20:30:37 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rFR5jbp31O 00:32:11.869 20:30:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rFR5jbp31O 00:32:12.128 20:30:37 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:12.128 20:30:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:12.388 nvme0n1 00:32:12.388 20:30:37 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:32:12.388 20:30:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:12.388 20:30:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:12.388 20:30:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:12.388 20:30:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:12.388 20:30:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:12.647 20:30:37 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:12.647 20:30:37 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:12.647 20:30:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:12.905 20:30:38 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:12.905 20:30:38 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:12.905 20:30:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:12.905 20:30:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:12.905 20:30:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:13.164 20:30:38 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:13.164 20:30:38 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:13.164 20:30:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:13.164 20:30:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:13.164 20:30:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:13.164 20:30:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:13.164 20:30:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:13.422 20:30:38 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:13.422 20:30:38 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:13.422 20:30:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:13.680 20:30:38 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:13.680 20:30:38 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:13.680 20:30:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:13.938 20:30:39 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:13.938 20:30:39 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rFR5jbp31O 00:32:13.938 20:30:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rFR5jbp31O 00:32:14.212 20:30:39 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yOhAV5oOwP 00:32:14.212 20:30:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yOhAV5oOwP 00:32:14.212 20:30:39 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:14.212 20:30:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:14.532 nvme0n1 00:32:14.532 20:30:39 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:14.532 20:30:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:14.817 20:30:40 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:14.817 "subsystems": [ 00:32:14.817 { 00:32:14.817 "subsystem": "keyring", 00:32:14.817 "config": [ 00:32:14.817 { 00:32:14.817 "method": "keyring_file_add_key", 00:32:14.817 "params": { 00:32:14.817 "name": "key0", 00:32:14.817 "path": "/tmp/tmp.rFR5jbp31O" 00:32:14.817 } 00:32:14.817 }, 00:32:14.817 { 00:32:14.817 "method": "keyring_file_add_key", 00:32:14.817 "params": { 00:32:14.817 "name": "key1", 00:32:14.817 "path": "/tmp/tmp.yOhAV5oOwP" 00:32:14.817 } 00:32:14.817 } 00:32:14.817 ] 00:32:14.817 }, 00:32:14.817 { 00:32:14.817 "subsystem": "iobuf", 00:32:14.817 "config": [ 00:32:14.817 { 00:32:14.817 "method": "iobuf_set_options", 00:32:14.817 "params": { 00:32:14.817 "small_pool_count": 8192, 00:32:14.817 "large_pool_count": 1024, 00:32:14.817 "small_bufsize": 8192, 00:32:14.817 "large_bufsize": 135168 00:32:14.817 } 00:32:14.817 } 00:32:14.817 ] 00:32:14.817 }, 00:32:14.817 { 00:32:14.817 "subsystem": "sock", 00:32:14.817 "config": [ 00:32:14.817 { 00:32:14.817 "method": "sock_set_default_impl", 00:32:14.817 "params": { 00:32:14.817 "impl_name": "posix" 00:32:14.817 } 00:32:14.817 }, 00:32:14.817 { 00:32:14.817 "method": "sock_impl_set_options", 00:32:14.817 "params": { 00:32:14.817 "impl_name": "ssl", 00:32:14.817 "recv_buf_size": 4096, 00:32:14.817 "send_buf_size": 4096, 00:32:14.817 "enable_recv_pipe": true, 00:32:14.817 "enable_quickack": false, 00:32:14.817 "enable_placement_id": 0, 00:32:14.817 "enable_zerocopy_send_server": true, 00:32:14.817 "enable_zerocopy_send_client": false, 00:32:14.817 "zerocopy_threshold": 0, 00:32:14.817 "tls_version": 0, 00:32:14.817 "enable_ktls": false 00:32:14.817 } 00:32:14.817 }, 00:32:14.817 { 00:32:14.817 "method": "sock_impl_set_options", 00:32:14.817 "params": { 00:32:14.817 "impl_name": "posix", 00:32:14.817 "recv_buf_size": 2097152, 00:32:14.817 "send_buf_size": 2097152, 00:32:14.817 "enable_recv_pipe": true, 00:32:14.817 "enable_quickack": false, 00:32:14.817 "enable_placement_id": 0, 00:32:14.817 "enable_zerocopy_send_server": true, 00:32:14.817 "enable_zerocopy_send_client": false, 00:32:14.817 "zerocopy_threshold": 0, 00:32:14.817 "tls_version": 0, 00:32:14.817 "enable_ktls": false 00:32:14.817 } 00:32:14.817 } 00:32:14.817 ] 00:32:14.817 }, 00:32:14.817 { 00:32:14.817 "subsystem": "vmd", 00:32:14.817 "config": [] 00:32:14.817 }, 00:32:14.817 { 00:32:14.817 "subsystem": "accel", 00:32:14.817 "config": [ 00:32:14.817 { 00:32:14.817 "method": "accel_set_options", 00:32:14.817 "params": { 00:32:14.817 "small_cache_size": 128, 00:32:14.817 "large_cache_size": 16, 00:32:14.817 "task_count": 2048, 00:32:14.817 "sequence_count": 2048, 00:32:14.817 "buf_count": 2048 00:32:14.817 } 00:32:14.817 } 00:32:14.817 ] 00:32:14.817 }, 00:32:14.817 { 00:32:14.817 
"subsystem": "bdev", 00:32:14.817 "config": [ 00:32:14.817 { 00:32:14.817 "method": "bdev_set_options", 00:32:14.817 "params": { 00:32:14.817 "bdev_io_pool_size": 65535, 00:32:14.817 "bdev_io_cache_size": 256, 00:32:14.817 "bdev_auto_examine": true, 00:32:14.817 "iobuf_small_cache_size": 128, 00:32:14.817 "iobuf_large_cache_size": 16 00:32:14.817 } 00:32:14.817 }, 00:32:14.817 { 00:32:14.817 "method": "bdev_raid_set_options", 00:32:14.817 "params": { 00:32:14.817 "process_window_size_kb": 1024 00:32:14.817 } 00:32:14.817 }, 00:32:14.817 { 00:32:14.817 "method": "bdev_iscsi_set_options", 00:32:14.817 "params": { 00:32:14.817 "timeout_sec": 30 00:32:14.817 } 00:32:14.817 }, 00:32:14.817 { 00:32:14.817 "method": "bdev_nvme_set_options", 00:32:14.817 "params": { 00:32:14.817 "action_on_timeout": "none", 00:32:14.817 "timeout_us": 0, 00:32:14.817 "timeout_admin_us": 0, 00:32:14.817 "keep_alive_timeout_ms": 10000, 00:32:14.817 "arbitration_burst": 0, 00:32:14.817 "low_priority_weight": 0, 00:32:14.818 "medium_priority_weight": 0, 00:32:14.818 "high_priority_weight": 0, 00:32:14.818 "nvme_adminq_poll_period_us": 10000, 00:32:14.818 "nvme_ioq_poll_period_us": 0, 00:32:14.818 "io_queue_requests": 512, 00:32:14.818 "delay_cmd_submit": true, 00:32:14.818 "transport_retry_count": 4, 00:32:14.818 "bdev_retry_count": 3, 00:32:14.818 "transport_ack_timeout": 0, 00:32:14.818 "ctrlr_loss_timeout_sec": 0, 00:32:14.818 "reconnect_delay_sec": 0, 00:32:14.818 "fast_io_fail_timeout_sec": 0, 00:32:14.818 "disable_auto_failback": false, 00:32:14.818 "generate_uuids": false, 00:32:14.818 "transport_tos": 0, 00:32:14.818 "nvme_error_stat": false, 00:32:14.818 "rdma_srq_size": 0, 00:32:14.818 "io_path_stat": false, 00:32:14.818 "allow_accel_sequence": false, 00:32:14.818 "rdma_max_cq_size": 0, 00:32:14.818 "rdma_cm_event_timeout_ms": 0, 00:32:14.818 "dhchap_digests": [ 00:32:14.818 "sha256", 00:32:14.818 "sha384", 00:32:14.818 "sha512" 00:32:14.818 ], 00:32:14.818 "dhchap_dhgroups": [ 00:32:14.818 "null", 00:32:14.818 "ffdhe2048", 00:32:14.818 "ffdhe3072", 00:32:14.818 "ffdhe4096", 00:32:14.818 "ffdhe6144", 00:32:14.818 "ffdhe8192" 00:32:14.818 ] 00:32:14.818 } 00:32:14.818 }, 00:32:14.818 { 00:32:14.818 "method": "bdev_nvme_attach_controller", 00:32:14.818 "params": { 00:32:14.818 "name": "nvme0", 00:32:14.818 "trtype": "TCP", 00:32:14.818 "adrfam": "IPv4", 00:32:14.818 "traddr": "127.0.0.1", 00:32:14.818 "trsvcid": "4420", 00:32:14.818 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:14.818 "prchk_reftag": false, 00:32:14.818 "prchk_guard": false, 00:32:14.818 "ctrlr_loss_timeout_sec": 0, 00:32:14.818 "reconnect_delay_sec": 0, 00:32:14.818 "fast_io_fail_timeout_sec": 0, 00:32:14.818 "psk": "key0", 00:32:14.818 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:14.818 "hdgst": false, 00:32:14.818 "ddgst": false 00:32:14.818 } 00:32:14.818 }, 00:32:14.818 { 00:32:14.818 "method": "bdev_nvme_set_hotplug", 00:32:14.818 "params": { 00:32:14.818 "period_us": 100000, 00:32:14.818 "enable": false 00:32:14.818 } 00:32:14.818 }, 00:32:14.818 { 00:32:14.818 "method": "bdev_wait_for_examine" 00:32:14.818 } 00:32:14.818 ] 00:32:14.818 }, 00:32:14.818 { 00:32:14.818 "subsystem": "nbd", 00:32:14.818 "config": [] 00:32:14.818 } 00:32:14.818 ] 00:32:14.818 }' 00:32:14.818 20:30:40 keyring_file -- keyring/file.sh@114 -- # killprocess 259608 00:32:14.818 20:30:40 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 259608 ']' 00:32:14.818 20:30:40 keyring_file -- common/autotest_common.sh@952 -- # kill -0 259608 00:32:14.818 20:30:40 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:14.818 20:30:40 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:14.818 20:30:40 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 259608 00:32:15.077 20:30:40 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:15.077 20:30:40 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:15.077 20:30:40 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 259608' 00:32:15.077 killing process with pid 259608 00:32:15.077 20:30:40 keyring_file -- common/autotest_common.sh@967 -- # kill 259608 00:32:15.077 Received shutdown signal, test time was about 1.000000 seconds 00:32:15.077 00:32:15.077 Latency(us) 00:32:15.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.077 =================================================================================================================== 00:32:15.077 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:15.077 20:30:40 keyring_file -- common/autotest_common.sh@972 -- # wait 259608 00:32:15.077 20:30:40 keyring_file -- keyring/file.sh@117 -- # bperfpid=261581 00:32:15.077 20:30:40 keyring_file -- keyring/file.sh@119 -- # waitforlisten 261581 /var/tmp/bperf.sock 00:32:15.077 20:30:40 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 261581 ']' 00:32:15.077 20:30:40 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:15.077 20:30:40 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:15.077 20:30:40 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:15.077 20:30:40 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:15.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
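The replacement bdevperf instance above receives its configuration on /dev/fd/63: the JSON captured earlier with save_config is fed back in via process substitution rather than written to disk. Schematically, with the flags copied from the command line in the log:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    config=$($rpc save_config)      # captured while the first bdevperf (pid 259608) was still alive
    # ... first bdevperf instance killed here ...
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z \
        -c <(echo "$config")        # the <(...) substitution is what appears as /dev/fd/63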
00:32:15.077 20:30:40 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:15.077 "subsystems": [ 00:32:15.077 { 00:32:15.077 "subsystem": "keyring", 00:32:15.077 "config": [ 00:32:15.077 { 00:32:15.077 "method": "keyring_file_add_key", 00:32:15.077 "params": { 00:32:15.077 "name": "key0", 00:32:15.077 "path": "/tmp/tmp.rFR5jbp31O" 00:32:15.077 } 00:32:15.077 }, 00:32:15.077 { 00:32:15.077 "method": "keyring_file_add_key", 00:32:15.077 "params": { 00:32:15.077 "name": "key1", 00:32:15.077 "path": "/tmp/tmp.yOhAV5oOwP" 00:32:15.077 } 00:32:15.077 } 00:32:15.077 ] 00:32:15.077 }, 00:32:15.077 { 00:32:15.077 "subsystem": "iobuf", 00:32:15.077 "config": [ 00:32:15.077 { 00:32:15.077 "method": "iobuf_set_options", 00:32:15.077 "params": { 00:32:15.077 "small_pool_count": 8192, 00:32:15.077 "large_pool_count": 1024, 00:32:15.077 "small_bufsize": 8192, 00:32:15.077 "large_bufsize": 135168 00:32:15.077 } 00:32:15.077 } 00:32:15.077 ] 00:32:15.077 }, 00:32:15.077 { 00:32:15.077 "subsystem": "sock", 00:32:15.077 "config": [ 00:32:15.077 { 00:32:15.077 "method": "sock_set_default_impl", 00:32:15.077 "params": { 00:32:15.077 "impl_name": "posix" 00:32:15.077 } 00:32:15.077 }, 00:32:15.077 { 00:32:15.077 "method": "sock_impl_set_options", 00:32:15.077 "params": { 00:32:15.077 "impl_name": "ssl", 00:32:15.077 "recv_buf_size": 4096, 00:32:15.077 "send_buf_size": 4096, 00:32:15.077 "enable_recv_pipe": true, 00:32:15.077 "enable_quickack": false, 00:32:15.077 "enable_placement_id": 0, 00:32:15.077 "enable_zerocopy_send_server": true, 00:32:15.077 "enable_zerocopy_send_client": false, 00:32:15.077 "zerocopy_threshold": 0, 00:32:15.077 "tls_version": 0, 00:32:15.077 "enable_ktls": false 00:32:15.077 } 00:32:15.077 }, 00:32:15.077 { 00:32:15.077 "method": "sock_impl_set_options", 00:32:15.077 "params": { 00:32:15.077 "impl_name": "posix", 00:32:15.077 "recv_buf_size": 2097152, 00:32:15.077 "send_buf_size": 2097152, 00:32:15.077 "enable_recv_pipe": true, 00:32:15.077 "enable_quickack": false, 00:32:15.077 "enable_placement_id": 0, 00:32:15.077 "enable_zerocopy_send_server": true, 00:32:15.077 "enable_zerocopy_send_client": false, 00:32:15.077 "zerocopy_threshold": 0, 00:32:15.077 "tls_version": 0, 00:32:15.077 "enable_ktls": false 00:32:15.077 } 00:32:15.077 } 00:32:15.077 ] 00:32:15.077 }, 00:32:15.077 { 00:32:15.077 "subsystem": "vmd", 00:32:15.077 "config": [] 00:32:15.077 }, 00:32:15.077 { 00:32:15.077 "subsystem": "accel", 00:32:15.077 "config": [ 00:32:15.077 { 00:32:15.077 "method": "accel_set_options", 00:32:15.077 "params": { 00:32:15.077 "small_cache_size": 128, 00:32:15.077 "large_cache_size": 16, 00:32:15.077 "task_count": 2048, 00:32:15.077 "sequence_count": 2048, 00:32:15.077 "buf_count": 2048 00:32:15.077 } 00:32:15.077 } 00:32:15.077 ] 00:32:15.077 }, 00:32:15.077 { 00:32:15.077 "subsystem": "bdev", 00:32:15.077 "config": [ 00:32:15.077 { 00:32:15.077 "method": "bdev_set_options", 00:32:15.077 "params": { 00:32:15.077 "bdev_io_pool_size": 65535, 00:32:15.077 "bdev_io_cache_size": 256, 00:32:15.077 "bdev_auto_examine": true, 00:32:15.077 "iobuf_small_cache_size": 128, 00:32:15.077 "iobuf_large_cache_size": 16 00:32:15.077 } 00:32:15.077 }, 00:32:15.078 { 00:32:15.078 "method": "bdev_raid_set_options", 00:32:15.078 "params": { 00:32:15.078 "process_window_size_kb": 1024 00:32:15.078 } 00:32:15.078 }, 00:32:15.078 { 00:32:15.078 "method": "bdev_iscsi_set_options", 00:32:15.078 "params": { 00:32:15.078 "timeout_sec": 30 00:32:15.078 } 00:32:15.078 }, 00:32:15.078 { 00:32:15.078 "method": 
"bdev_nvme_set_options", 00:32:15.078 "params": { 00:32:15.078 "action_on_timeout": "none", 00:32:15.078 "timeout_us": 0, 00:32:15.078 "timeout_admin_us": 0, 00:32:15.078 "keep_alive_timeout_ms": 10000, 00:32:15.078 "arbitration_burst": 0, 00:32:15.078 "low_priority_weight": 0, 00:32:15.078 "medium_priority_weight": 0, 00:32:15.078 "high_priority_weight": 0, 00:32:15.078 "nvme_adminq_poll_period_us": 10000, 00:32:15.078 "nvme_ioq_poll_period_us": 0, 00:32:15.078 "io_queue_requests": 512, 00:32:15.078 "delay_cmd_submit": true, 00:32:15.078 "transport_retry_count": 4, 00:32:15.078 "bdev_retry_count": 3, 00:32:15.078 "transport_ack_timeout": 0, 00:32:15.078 "ctrlr_loss_timeout_sec": 0, 00:32:15.078 "reconnect_delay_sec": 0, 00:32:15.078 "fast_io_fail_timeout_sec": 0, 00:32:15.078 "disable_auto_failback": false, 00:32:15.078 "generate_uuids": false, 00:32:15.078 "transport_tos": 0, 00:32:15.078 "nvme_error_stat": false, 00:32:15.078 "rdma_srq_size": 0, 00:32:15.078 "io_path_stat": false, 00:32:15.078 "allow_accel_sequence": false, 00:32:15.078 "rdma_max_cq_size": 0, 00:32:15.078 "rdma_cm_event_timeout_ms": 0, 00:32:15.078 "dhchap_digests": [ 00:32:15.078 "sha256", 00:32:15.078 "sha384", 00:32:15.078 "sha512" 00:32:15.078 ], 00:32:15.078 "dhchap_dhgroups": [ 00:32:15.078 "null", 00:32:15.078 "ffdhe2048", 00:32:15.078 "ffdhe3072", 00:32:15.078 "ffdhe4096", 00:32:15.078 "ffdhe6144", 00:32:15.078 "ffdhe8192" 00:32:15.078 ] 00:32:15.078 } 00:32:15.078 }, 00:32:15.078 { 00:32:15.078 "method": "bdev_nvme_attach_controller", 00:32:15.078 "params": { 00:32:15.078 "name": "nvme0", 00:32:15.078 "trtype": "TCP", 00:32:15.078 "adrfam": "IPv4", 00:32:15.078 "traddr": "127.0.0.1", 00:32:15.078 "trsvcid": "4420", 00:32:15.078 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:15.078 "prchk_reftag": false, 00:32:15.078 "prchk_guard": false, 00:32:15.078 "ctrlr_loss_timeout_sec": 0, 00:32:15.078 "reconnect_delay_sec": 0, 00:32:15.078 "fast_io_fail_timeout_sec": 0, 00:32:15.078 "psk": "key0", 00:32:15.078 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:15.078 "hdgst": false, 00:32:15.078 "ddgst": false 00:32:15.078 } 00:32:15.078 }, 00:32:15.078 { 00:32:15.078 "method": "bdev_nvme_set_hotplug", 00:32:15.078 "params": { 00:32:15.078 "period_us": 100000, 00:32:15.078 "enable": false 00:32:15.078 } 00:32:15.078 }, 00:32:15.078 { 00:32:15.078 "method": "bdev_wait_for_examine" 00:32:15.078 } 00:32:15.078 ] 00:32:15.078 }, 00:32:15.078 { 00:32:15.078 "subsystem": "nbd", 00:32:15.078 "config": [] 00:32:15.078 } 00:32:15.078 ] 00:32:15.078 }' 00:32:15.078 20:30:40 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:15.078 20:30:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:15.337 [2024-07-15 20:30:40.427963] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
00:32:15.337 [2024-07-15 20:30:40.428024] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid261581 ] 00:32:15.337 EAL: No free 2048 kB hugepages reported on node 1 00:32:15.337 [2024-07-15 20:30:40.498947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.337 [2024-07-15 20:30:40.590884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:15.611 [2024-07-15 20:30:40.757210] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:16.179 20:30:41 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:16.179 20:30:41 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:16.179 20:30:41 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:16.179 20:30:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:16.179 20:30:41 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:16.437 20:30:41 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:16.437 20:30:41 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:16.437 20:30:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:16.437 20:30:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:16.437 20:30:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:16.437 20:30:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:16.437 20:30:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:16.694 20:30:41 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:16.694 20:30:41 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:16.694 20:30:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:16.694 20:30:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:16.694 20:30:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:16.694 20:30:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:16.694 20:30:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:16.694 20:30:42 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:16.952 20:30:42 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:16.952 20:30:42 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:16.952 20:30:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:16.952 20:30:42 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:16.952 20:30:42 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:16.952 20:30:42 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.rFR5jbp31O /tmp/tmp.yOhAV5oOwP 00:32:16.952 20:30:42 keyring_file -- keyring/file.sh@20 -- # killprocess 261581 00:32:16.952 20:30:42 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 261581 ']' 00:32:16.952 20:30:42 keyring_file -- common/autotest_common.sh@952 -- # kill -0 261581 00:32:16.952 20:30:42 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:32:17.210 20:30:42 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:17.210 20:30:42 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 261581 00:32:17.210 20:30:42 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:17.210 20:30:42 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:17.210 20:30:42 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 261581' 00:32:17.210 killing process with pid 261581 00:32:17.210 20:30:42 keyring_file -- common/autotest_common.sh@967 -- # kill 261581 00:32:17.210 Received shutdown signal, test time was about 1.000000 seconds 00:32:17.210 00:32:17.210 Latency(us) 00:32:17.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.210 =================================================================================================================== 00:32:17.210 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:17.210 20:30:42 keyring_file -- common/autotest_common.sh@972 -- # wait 261581 00:32:17.210 20:30:42 keyring_file -- keyring/file.sh@21 -- # killprocess 259505 00:32:17.210 20:30:42 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 259505 ']' 00:32:17.210 20:30:42 keyring_file -- common/autotest_common.sh@952 -- # kill -0 259505 00:32:17.210 20:30:42 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:17.210 20:30:42 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:17.210 20:30:42 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 259505 00:32:17.519 20:30:42 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:17.519 20:30:42 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:17.519 20:30:42 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 259505' 00:32:17.519 killing process with pid 259505 00:32:17.519 20:30:42 keyring_file -- common/autotest_common.sh@967 -- # kill 259505 00:32:17.519 [2024-07-15 20:30:42.591600] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:17.519 20:30:42 keyring_file -- common/autotest_common.sh@972 -- # wait 259505 00:32:17.777 00:32:17.777 real 0m14.648s 00:32:17.777 user 0m36.325s 00:32:17.777 sys 0m3.161s 00:32:17.777 20:30:42 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:17.777 20:30:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:17.777 ************************************ 00:32:17.777 END TEST keyring_file 00:32:17.777 ************************************ 00:32:17.777 20:30:42 -- common/autotest_common.sh@1142 -- # return 0 00:32:17.777 20:30:42 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:32:17.777 20:30:42 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:17.777 20:30:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:17.777 20:30:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:17.777 20:30:42 -- common/autotest_common.sh@10 -- # set +x 00:32:17.777 ************************************ 00:32:17.777 START TEST keyring_linux 00:32:17.777 ************************************ 00:32:17.777 20:30:43 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:17.777 * Looking for test storage... 00:32:17.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:17.777 20:30:43 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:17.777 20:30:43 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:17.777 20:30:43 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:17.777 20:30:43 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:17.777 20:30:43 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:17.777 20:30:43 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.777 20:30:43 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.777 20:30:43 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.777 20:30:43 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:17.777 20:30:43 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:17.777 20:30:43 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:17.777 20:30:43 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:17.777 20:30:43 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:17.777 20:30:43 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:17.777 20:30:43 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:17.777 20:30:43 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:17.777 20:30:43 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:17.777 20:30:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:17.777 20:30:43 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:17.777 20:30:43 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:17.777 20:30:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:17.777 20:30:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:17.777 20:30:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:17.777 20:30:43 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:18.036 20:30:43 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:18.036 20:30:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:18.036 /tmp/:spdk-test:key0 00:32:18.036 20:30:43 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:18.036 20:30:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:18.036 20:30:43 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:18.036 20:30:43 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:18.036 20:30:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:18.036 20:30:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:18.036 20:30:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:18.036 20:30:43 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:18.036 20:30:43 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:18.036 20:30:43 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:18.036 20:30:43 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:18.036 20:30:43 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:18.036 20:30:43 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:18.036 20:30:43 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:18.036 20:30:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:18.036 /tmp/:spdk-test:key1 00:32:18.036 20:30:43 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:18.036 20:30:43 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=262190 00:32:18.036 20:30:43 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 262190 00:32:18.036 20:30:43 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 262190 ']' 00:32:18.036 20:30:43 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:18.036 20:30:43 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:18.036 20:30:43 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:18.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:18.036 20:30:43 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:18.036 20:30:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:18.036 [2024-07-15 20:30:43.238808] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
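[annotation] The prep_key output above suggests the key files hold the NVMe/TCP PSK "interchange" form of the configured key. A rough reconstruction of that formatting step is sketched below; it is inferred from the printed values (32 key bytes plus a 4-byte trailer, base64-encoded between the prefix and hash-id fields), not copied from the SPDK source, and the little-endian CRC32 trailer in particular is an assumption.
# assumed sketch: build "NVMeTLSkey-1:<hash-id>:base64(key || crc32):" and store it with mode 0600
key=00112233445566778899aabbccddeeff digest=0 path=/tmp/:spdk-test:key0
python3 -c 'import base64, sys, zlib
k = sys.argv[1].encode(); d = int(sys.argv[2])
crc = zlib.crc32(k).to_bytes(4, "little")  # 4-byte CRC trailer; byte order assumed
print("NVMeTLSkey-1:{:02x}:{}:".format(d, base64.b64encode(k + crc).decode()))' \
    "$key" "$digest" > "$path" && chmod 0600 "$path"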
00:32:18.036 [2024-07-15 20:30:43.238853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid262190 ] 00:32:18.036 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.036 [2024-07-15 20:30:43.301435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.294 [2024-07-15 20:30:43.389157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.294 20:30:43 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:18.294 20:30:43 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:18.294 20:30:43 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:18.294 20:30:43 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.294 20:30:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:18.294 [2024-07-15 20:30:43.604754] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:18.294 null0 00:32:18.294 [2024-07-15 20:30:43.636796] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:18.294 [2024-07-15 20:30:43.637185] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:18.551 20:30:43 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.551 20:30:43 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:18.551 503864355 00:32:18.551 20:30:43 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:18.551 372156478 00:32:18.551 20:30:43 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=262287 00:32:18.551 20:30:43 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 262287 /var/tmp/bperf.sock 00:32:18.551 20:30:43 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:18.551 20:30:43 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 262287 ']' 00:32:18.551 20:30:43 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:18.551 20:30:43 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:18.551 20:30:43 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:18.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:18.551 20:30:43 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:18.551 20:30:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:18.551 [2024-07-15 20:30:43.712442] Starting SPDK v24.09-pre git sha1 24018edd4 / DPDK 24.03.0 initialization... 
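[annotation] The keyctl calls above follow the standard session-keyring workflow; a condensed sketch is shown below (key names and file paths are the ones from the log, serial numbers will of course differ per run).
# load both interchange-format keys into the session keyring and capture their serials
sn0=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)
sn1=$(keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s)
keyctl print "$sn0"                      # echoes the NVMeTLSkey-1:00:...: payload back
keyctl search @s user :spdk-test:key0    # resolves to the same serial as $sn0
keyctl unlink "$sn0"                     # cleanup step used at the end of the test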
00:32:18.551 [2024-07-15 20:30:43.712497] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid262287 ] 00:32:18.551 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.551 [2024-07-15 20:30:43.783671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.551 [2024-07-15 20:30:43.875218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.808 20:30:43 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:18.808 20:30:43 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:18.808 20:30:43 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:18.808 20:30:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:19.067 20:30:44 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:19.067 20:30:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:19.325 20:30:44 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:19.325 20:30:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:19.583 [2024-07-15 20:30:44.704383] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:19.583 nvme0n1 00:32:19.583 20:30:44 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:19.583 20:30:44 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:19.583 20:30:44 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:19.583 20:30:44 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:19.583 20:30:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.583 20:30:44 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:19.843 20:30:45 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:19.843 20:30:45 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:19.843 20:30:45 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:19.843 20:30:45 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:19.843 20:30:45 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.843 20:30:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.843 20:30:45 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:20.101 20:30:45 keyring_linux -- keyring/linux.sh@25 -- # sn=503864355 00:32:20.101 20:30:45 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:20.101 20:30:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:32:20.101 20:30:45 keyring_linux -- keyring/linux.sh@26 -- # [[ 503864355 == \5\0\3\8\6\4\3\5\5 ]] 00:32:20.101 20:30:45 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 503864355 00:32:20.101 20:30:45 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:20.101 20:30:45 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:20.101 Running I/O for 1 seconds... 00:32:21.480 00:32:21.480 Latency(us) 00:32:21.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:21.480 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:21.480 nvme0n1 : 1.01 11633.98 45.45 0.00 0.00 10938.36 7626.01 19065.02 00:32:21.480 =================================================================================================================== 00:32:21.480 Total : 11633.98 45.45 0.00 0.00 10938.36 7626.01 19065.02 00:32:21.480 0 00:32:21.480 20:30:46 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:21.480 20:30:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:21.480 20:30:46 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:21.480 20:30:46 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:21.480 20:30:46 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:21.480 20:30:46 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:21.480 20:30:46 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:21.480 20:30:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.739 20:30:46 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:21.739 20:30:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:21.739 20:30:46 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:21.739 20:30:46 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:21.739 20:30:46 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:32:21.739 20:30:46 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:21.739 20:30:46 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:21.739 20:30:46 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:21.739 20:30:46 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:21.739 20:30:46 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:21.739 20:30:46 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:21.739 20:30:46 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:21.998 [2024-07-15 20:30:47.171190] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:21.998 [2024-07-15 20:30:47.171784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc89210 (107): Transport endpoint is not connected 00:32:21.998 [2024-07-15 20:30:47.172776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc89210 (9): Bad file descriptor 00:32:21.998 [2024-07-15 20:30:47.173777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:21.998 [2024-07-15 20:30:47.173789] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:21.998 [2024-07-15 20:30:47.173799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:21.998 request: 00:32:21.998 { 00:32:21.998 "name": "nvme0", 00:32:21.998 "trtype": "tcp", 00:32:21.998 "traddr": "127.0.0.1", 00:32:21.998 "adrfam": "ipv4", 00:32:21.998 "trsvcid": "4420", 00:32:21.998 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:21.998 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:21.998 "prchk_reftag": false, 00:32:21.998 "prchk_guard": false, 00:32:21.998 "hdgst": false, 00:32:21.998 "ddgst": false, 00:32:21.998 "psk": ":spdk-test:key1", 00:32:21.998 "method": "bdev_nvme_attach_controller", 00:32:21.998 "req_id": 1 00:32:21.998 } 00:32:21.998 Got JSON-RPC error response 00:32:21.998 response: 00:32:21.998 { 00:32:21.998 "code": -5, 00:32:21.998 "message": "Input/output error" 00:32:21.998 } 00:32:21.998 20:30:47 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:32:21.998 20:30:47 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:21.998 20:30:47 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:21.998 20:30:47 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:21.998 20:30:47 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:21.998 20:30:47 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:21.998 20:30:47 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:21.998 20:30:47 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:21.998 20:30:47 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:21.998 20:30:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:21.998 20:30:47 keyring_linux -- keyring/linux.sh@33 -- # sn=503864355 00:32:21.998 20:30:47 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 503864355 00:32:21.998 1 links removed 00:32:21.998 20:30:47 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:21.998 20:30:47 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:21.998 20:30:47 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:21.998 20:30:47 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:21.998 20:30:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:21.998 20:30:47 keyring_linux -- keyring/linux.sh@33 -- # sn=372156478 00:32:21.998 20:30:47 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 372156478 00:32:21.998 1 links removed 00:32:21.998 20:30:47 keyring_linux -- keyring/linux.sh@41 -- # killprocess 262287 00:32:21.998 20:30:47 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 262287 ']' 00:32:21.998 20:30:47 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 262287 00:32:21.998 20:30:47 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:21.998 20:30:47 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:21.998 20:30:47 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 262287 00:32:21.998 20:30:47 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:21.998 20:30:47 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:21.998 20:30:47 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 262287' 00:32:21.998 killing process with pid 262287 00:32:21.998 20:30:47 keyring_linux -- common/autotest_common.sh@967 -- # kill 262287 00:32:21.998 Received shutdown signal, test time was about 1.000000 seconds 00:32:21.998 00:32:21.998 Latency(us) 00:32:21.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:21.998 =================================================================================================================== 00:32:21.998 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:21.998 20:30:47 keyring_linux -- common/autotest_common.sh@972 -- # wait 262287 00:32:22.258 20:30:47 keyring_linux -- keyring/linux.sh@42 -- # killprocess 262190 00:32:22.258 20:30:47 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 262190 ']' 00:32:22.258 20:30:47 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 262190 00:32:22.258 20:30:47 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:22.258 20:30:47 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:22.258 20:30:47 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 262190 00:32:22.258 20:30:47 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:22.258 20:30:47 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:22.258 20:30:47 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 262190' 00:32:22.258 killing process with pid 262190 00:32:22.258 20:30:47 keyring_linux -- common/autotest_common.sh@967 -- # kill 262190 00:32:22.258 20:30:47 keyring_linux -- common/autotest_common.sh@972 -- # wait 262190 00:32:22.516 00:32:22.516 real 0m4.833s 00:32:22.516 user 0m9.289s 00:32:22.516 sys 0m1.558s 00:32:22.516 20:30:47 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:22.516 20:30:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:22.516 ************************************ 00:32:22.516 END TEST keyring_linux 00:32:22.516 ************************************ 00:32:22.775 20:30:47 -- common/autotest_common.sh@1142 -- # return 0 00:32:22.775 20:30:47 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:22.775 20:30:47 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:22.775 20:30:47 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:22.775 20:30:47 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:32:22.775 20:30:47 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:32:22.775 20:30:47 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:22.775 20:30:47 -- spdk/autotest.sh@339 -- # '[' 0 -eq 
1 ']' 00:32:22.775 20:30:47 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:22.775 20:30:47 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:22.775 20:30:47 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:22.775 20:30:47 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:22.775 20:30:47 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:22.775 20:30:47 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:22.775 20:30:47 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:22.775 20:30:47 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:22.775 20:30:47 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:22.775 20:30:47 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:22.775 20:30:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:22.775 20:30:47 -- common/autotest_common.sh@10 -- # set +x 00:32:22.775 20:30:47 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:32:22.775 20:30:47 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:22.775 20:30:47 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:22.775 20:30:47 -- common/autotest_common.sh@10 -- # set +x 00:32:28.056 INFO: APP EXITING 00:32:28.056 INFO: killing all VMs 00:32:28.056 INFO: killing vhost app 00:32:28.056 WARN: no vhost pid file found 00:32:28.056 INFO: EXIT DONE 00:32:30.588 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:32:30.588 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:32:30.588 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:32:30.588 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:32:30.588 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:32:30.588 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:32:30.588 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:32:30.588 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:32:30.588 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:32:30.588 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:32:30.588 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:32:30.588 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:32:30.588 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:32:30.588 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:32:30.588 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:32:30.588 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:32:30.588 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:32:33.878 Cleaning 00:32:33.879 Removing: /var/run/dpdk/spdk0/config 00:32:33.879 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:33.879 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:33.879 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:33.879 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:33.879 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:33.879 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:33.879 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:33.879 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:33.879 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:33.879 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:33.879 Removing: /var/run/dpdk/spdk1/config 00:32:33.879 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:33.879 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:33.879 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:33.879 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:33.879 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:33.879 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:33.879 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:33.879 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:33.879 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:33.879 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:33.879 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:33.879 Removing: /var/run/dpdk/spdk2/config 00:32:33.879 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:33.879 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:33.879 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:33.879 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:33.879 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:33.879 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:33.879 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:33.879 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:33.879 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:33.879 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:33.879 Removing: /var/run/dpdk/spdk3/config 00:32:33.879 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:33.879 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:33.879 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:33.879 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:33.879 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:33.879 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:33.879 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:33.879 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:33.879 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:33.879 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:33.879 Removing: /var/run/dpdk/spdk4/config 00:32:33.879 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:33.879 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:33.879 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:33.879 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:33.879 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:33.879 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:33.879 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:33.879 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:33.879 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:33.879 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:33.879 Removing: /dev/shm/bdev_svc_trace.1 00:32:33.879 Removing: /dev/shm/nvmf_trace.0 00:32:33.879 Removing: /dev/shm/spdk_tgt_trace.pid4034270 00:32:33.879 Removing: /var/run/dpdk/spdk0 00:32:33.879 Removing: /var/run/dpdk/spdk1 00:32:33.879 Removing: /var/run/dpdk/spdk2 00:32:33.879 Removing: /var/run/dpdk/spdk3 00:32:33.879 Removing: /var/run/dpdk/spdk4 00:32:33.879 Removing: /var/run/dpdk/spdk_pid102600 00:32:33.879 Removing: /var/run/dpdk/spdk_pid10579 00:32:33.879 Removing: /var/run/dpdk/spdk_pid107055 00:32:33.879 Removing: /var/run/dpdk/spdk_pid113707 00:32:33.879 Removing: /var/run/dpdk/spdk_pid115163 00:32:33.879 Removing: /var/run/dpdk/spdk_pid116768 00:32:33.879 Removing: /var/run/dpdk/spdk_pid121225 00:32:33.879 Removing: /var/run/dpdk/spdk_pid125258 00:32:33.879 Removing: /var/run/dpdk/spdk_pid12605 00:32:33.879 Removing: /var/run/dpdk/spdk_pid1294 00:32:33.879 Removing: /var/run/dpdk/spdk_pid133050 
00:32:33.879 Removing: /var/run/dpdk/spdk_pid133067 00:32:33.879 Removing: /var/run/dpdk/spdk_pid13515 00:32:33.879 Removing: /var/run/dpdk/spdk_pid138017 00:32:33.879 Removing: /var/run/dpdk/spdk_pid138174 00:32:33.879 Removing: /var/run/dpdk/spdk_pid138378 00:32:33.879 Removing: /var/run/dpdk/spdk_pid138903 00:32:33.879 Removing: /var/run/dpdk/spdk_pid138910 00:32:33.879 Removing: /var/run/dpdk/spdk_pid143706 00:32:33.879 Removing: /var/run/dpdk/spdk_pid144308 00:32:33.879 Removing: /var/run/dpdk/spdk_pid149009 00:32:33.879 Removing: /var/run/dpdk/spdk_pid151891 00:32:33.879 Removing: /var/run/dpdk/spdk_pid157540 00:32:33.879 Removing: /var/run/dpdk/spdk_pid163746 00:32:33.879 Removing: /var/run/dpdk/spdk_pid173445 00:32:33.879 Removing: /var/run/dpdk/spdk_pid180470 00:32:33.879 Removing: /var/run/dpdk/spdk_pid180497 00:32:33.879 Removing: /var/run/dpdk/spdk_pid200219 00:32:33.879 Removing: /var/run/dpdk/spdk_pid200977 00:32:33.879 Removing: /var/run/dpdk/spdk_pid201540 00:32:33.879 Removing: /var/run/dpdk/spdk_pid202333 00:32:33.879 Removing: /var/run/dpdk/spdk_pid203431 00:32:33.879 Removing: /var/run/dpdk/spdk_pid203973 00:32:33.879 Removing: /var/run/dpdk/spdk_pid204516 00:32:33.879 Removing: /var/run/dpdk/spdk_pid205255 00:32:33.879 Removing: /var/run/dpdk/spdk_pid209870 00:32:33.879 Removing: /var/run/dpdk/spdk_pid210358 00:32:33.879 Removing: /var/run/dpdk/spdk_pid216490 00:32:33.879 Removing: /var/run/dpdk/spdk_pid216782 00:32:33.879 Removing: /var/run/dpdk/spdk_pid219281 00:32:33.879 Removing: /var/run/dpdk/spdk_pid227060 00:32:33.879 Removing: /var/run/dpdk/spdk_pid227193 00:32:33.879 Removing: /var/run/dpdk/spdk_pid232404 00:32:33.879 Removing: /var/run/dpdk/spdk_pid234580 00:32:33.879 Removing: /var/run/dpdk/spdk_pid236657 00:32:33.879 Removing: /var/run/dpdk/spdk_pid237968 00:32:33.879 Removing: /var/run/dpdk/spdk_pid240001 00:32:33.879 Removing: /var/run/dpdk/spdk_pid241328 00:32:33.879 Removing: /var/run/dpdk/spdk_pid250412 00:32:33.879 Removing: /var/run/dpdk/spdk_pid250934 00:32:33.879 Removing: /var/run/dpdk/spdk_pid251461 00:32:33.879 Removing: /var/run/dpdk/spdk_pid254260 00:32:33.879 Removing: /var/run/dpdk/spdk_pid254960 00:32:33.879 Removing: /var/run/dpdk/spdk_pid255488 00:32:33.879 Removing: /var/run/dpdk/spdk_pid259505 00:32:33.879 Removing: /var/run/dpdk/spdk_pid259608 00:32:33.879 Removing: /var/run/dpdk/spdk_pid261581 00:32:33.879 Removing: /var/run/dpdk/spdk_pid262190 00:32:33.879 Removing: /var/run/dpdk/spdk_pid262287 00:32:33.879 Removing: /var/run/dpdk/spdk_pid31870 00:32:33.879 Removing: /var/run/dpdk/spdk_pid35733 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4031842 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4033051 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4034270 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4034973 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4036042 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4036080 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4037165 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4037430 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4037800 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4039502 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4040934 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4041245 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4041554 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4041896 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4042235 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4042505 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4042785 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4043095 
00:32:33.879 Removing: /var/run/dpdk/spdk_pid4043663 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4047036 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4047324 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4047477 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4047630 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4048188 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4048367 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4048792 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4049023 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4049334 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4049394 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4049630 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4049894 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4050511 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4050792 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4051109 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4051421 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4051451 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4051727 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4051999 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4052305 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4052587 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4052866 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4053154 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4053435 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4053729 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4054017 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4054299 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4054576 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4054864 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4055141 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4055426 00:32:33.879 Removing: /var/run/dpdk/spdk_pid4055706 00:32:33.880 Removing: /var/run/dpdk/spdk_pid4055991 00:32:33.880 Removing: /var/run/dpdk/spdk_pid4056269 00:32:33.880 Removing: /var/run/dpdk/spdk_pid4056553 00:32:33.880 Removing: /var/run/dpdk/spdk_pid4056839 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4057121 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4057407 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4057476 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4057815 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4061667 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4109121 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4113658 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4125390 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4130813 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4135086 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4135700 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4142404 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4149187 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4149191 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4150104 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4151030 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4152071 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4152857 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4152859 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4153130 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4153344 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4153391 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4154269 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4155232 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4156271 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4156807 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4156816 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4157173 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4158653 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4159831 
00:32:34.138 Removing: /var/run/dpdk/spdk_pid4169027 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4169530 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4174109 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4180491 00:32:34.138 Removing: /var/run/dpdk/spdk_pid4183885 00:32:34.138 Removing: /var/run/dpdk/spdk_pid70862 00:32:34.138 Removing: /var/run/dpdk/spdk_pid75711 00:32:34.138 Removing: /var/run/dpdk/spdk_pid77532 00:32:34.138 Removing: /var/run/dpdk/spdk_pid79432 00:32:34.138 Removing: /var/run/dpdk/spdk_pid79633 00:32:34.138 Removing: /var/run/dpdk/spdk_pid79690 00:32:34.138 Removing: /var/run/dpdk/spdk_pid79910 00:32:34.138 Removing: /var/run/dpdk/spdk_pid80473 00:32:34.138 Removing: /var/run/dpdk/spdk_pid82319 00:32:34.138 Removing: /var/run/dpdk/spdk_pid83176 00:32:34.138 Removing: /var/run/dpdk/spdk_pid83734 00:32:34.138 Removing: /var/run/dpdk/spdk_pid86384 00:32:34.138 Removing: /var/run/dpdk/spdk_pid86949 00:32:34.138 Removing: /var/run/dpdk/spdk_pid87768 00:32:34.138 Removing: /var/run/dpdk/spdk_pid92087 00:32:34.138 Clean 00:32:34.397 20:30:59 -- common/autotest_common.sh@1451 -- # return 0 00:32:34.397 20:30:59 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:32:34.397 20:30:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:34.397 20:30:59 -- common/autotest_common.sh@10 -- # set +x 00:32:34.397 20:30:59 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:32:34.397 20:30:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:34.397 20:30:59 -- common/autotest_common.sh@10 -- # set +x 00:32:34.397 20:30:59 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:34.397 20:30:59 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:34.397 20:30:59 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:34.397 20:30:59 -- spdk/autotest.sh@391 -- # hash lcov 00:32:34.397 20:30:59 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:34.397 20:30:59 -- spdk/autotest.sh@393 -- # hostname 00:32:34.397 20:30:59 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-16 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:34.655 geninfo: WARNING: invalid characters removed from testname! 
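[annotation] The coverage post-processing that follows is spread over several long lcov invocations; condensed, the flow is roughly the one sketched below. The $rootdir and $out variables are placeholders for the workspace and output paths used in the log, the genhtml --rc flags are omitted, and the remove patterns are merged into one call here rather than applied one per invocation as in the log.
# capture coverage from the test run, merge with the baseline, then drop external paths
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
$LCOV -c -d "$rootdir" -t "$(hostname)" -o "$out/cov_test.info"
$LCOV -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
$LCOV -r "$out/cov_total.info" '*/dpdk/*' '/usr/*' '*/examples/vmd/*' -o "$out/cov_total.info"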
00:33:06.734 20:31:28 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:07.303 20:31:32 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:10.596 20:31:35 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:13.159 20:31:38 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:16.447 20:31:41 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:19.055 20:31:44 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:22.345 20:31:47 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:22.345 20:31:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:22.345 20:31:47 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:22.345 20:31:47 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:22.345 20:31:47 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:22.345 20:31:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:22.345 20:31:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:22.345 20:31:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:22.345 20:31:47 -- paths/export.sh@5 -- $ export PATH
00:33:22.345 20:31:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:22.345 20:31:47 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:33:22.345 20:31:47 -- common/autobuild_common.sh@444 -- $ date +%s
00:33:22.345 20:31:47 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721068307.XXXXXX
00:33:22.346 20:31:47 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721068307.P8kVTv
00:33:22.346 20:31:47 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:33:22.346 20:31:47 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:33:22.346 20:31:47 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:33:22.346 20:31:47 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:33:22.346 20:31:47 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:33:22.346 20:31:47 -- common/autobuild_common.sh@460 -- $ get_config_params
00:33:22.346 20:31:47 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:33:22.346 20:31:47 -- common/autotest_common.sh@10 -- $ set +x
00:33:22.346 20:31:47 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:33:22.346 20:31:47 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:33:22.346 20:31:47 -- pm/common@17 -- $ local monitor
00:33:22.346 20:31:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:22.346 20:31:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:22.346 20:31:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:22.346 20:31:47 -- pm/common@21 -- $ date +%s
00:33:22.346 20:31:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:22.346 20:31:47 -- pm/common@21 -- $ date +%s
00:33:22.346 20:31:47 -- pm/common@21 -- $ date +%s
00:33:22.346 20:31:47 -- pm/common@25 -- $ sleep 1
00:33:22.346 20:31:47 -- pm/common@21 -- $ date +%s
00:33:22.346 20:31:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721068307
00:33:22.346 20:31:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721068307
00:33:22.346 20:31:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721068307
00:33:22.346 20:31:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721068307
00:33:22.346 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721068307_collect-cpu-temp.pm.log
00:33:22.346 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721068307_collect-vmstat.pm.log
00:33:22.346 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721068307_collect-cpu-load.pm.log
00:33:22.346 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721068307_collect-bmc-pm.bmc.pm.log
00:33:22.915 20:31:48 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:33:22.915 20:31:48 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:33:22.915 20:31:48 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:22.915 20:31:48 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:33:22.915 20:31:48 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:33:22.915 20:31:48 -- spdk/autopackage.sh@19 -- $ timing_finish
00:33:22.915 20:31:48 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:22.915 20:31:48 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:33:22.916 20:31:48 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:22.916 20:31:48 -- spdk/autopackage.sh@20 -- $ exit 0
00:33:22.916 20:31:48 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:33:22.916 20:31:48 -- pm/common@29 -- $ signal_monitor_resources TERM
00:33:22.916 20:31:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:33:22.916 20:31:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:22.916 20:31:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:33:22.916 20:31:48 -- pm/common@44 -- $ pid=273195
00:33:22.916 20:31:48 -- pm/common@50 -- $ kill -TERM 273195
00:33:22.916 20:31:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:22.916 20:31:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:33:22.916 20:31:48 -- pm/common@44 -- $ pid=273197
00:33:22.916 20:31:48 -- pm/common@50 -- $ kill -TERM 273197
00:33:22.916 20:31:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:22.916 20:31:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:33:22.916 20:31:48 -- pm/common@44 -- $ pid=273199
00:33:22.916 20:31:48 -- pm/common@50 -- $ kill -TERM 273199
00:33:22.916 20:31:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:22.916 20:31:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:33:22.916 20:31:48 -- pm/common@44 -- $ pid=273224
00:33:22.916 20:31:48 -- pm/common@50 -- $ sudo -E kill -TERM 273224
00:33:22.916 + [[ -n 3920678 ]]
00:33:22.916 + sudo kill 3920678
00:33:23.187 [Pipeline] }
00:33:23.207 [Pipeline] // stage
00:33:23.213 [Pipeline] }
00:33:23.231 [Pipeline] // timeout
00:33:23.235 [Pipeline] }
00:33:23.251 [Pipeline] // catchError
00:33:23.258 [Pipeline] }
00:33:23.277 [Pipeline] // wrap
00:33:23.284 [Pipeline] }
00:33:23.302 [Pipeline] // catchError
00:33:23.312 [Pipeline] stage
00:33:23.314 [Pipeline] { (Epilogue)
00:33:23.329 [Pipeline] catchError
00:33:23.332 [Pipeline] {
00:33:23.350 [Pipeline] echo
00:33:23.352 Cleanup processes
00:33:23.359 [Pipeline] sh
00:33:23.644 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:23.644 273333 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:33:23.644 273642 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:23.662 [Pipeline] sh
00:33:23.946 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:23.946 ++ grep -v 'sudo pgrep'
00:33:23.946 ++ awk '{print $1}'
00:33:23.946 + sudo kill -9 273333
00:33:23.959 [Pipeline] sh
00:33:24.243 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:39.155 [Pipeline] sh
00:33:39.438 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:39.438 Artifacts sizes are good
00:33:39.452 [Pipeline] archiveArtifacts
00:33:39.460 Archiving artifacts
00:33:39.608 [Pipeline] sh
00:33:39.891 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:33:39.907 [Pipeline] cleanWs
00:33:39.917 [WS-CLEANUP] Deleting project workspace...
00:33:39.918 [WS-CLEANUP] Deferred wipeout is used...
00:33:39.924 [WS-CLEANUP] done
00:33:39.926 [Pipeline] }
00:33:39.948 [Pipeline] // catchError
00:33:39.963 [Pipeline] sh
00:33:40.265 + logger -p user.info -t JENKINS-CI
00:33:40.274 [Pipeline] }
00:33:40.290 [Pipeline] // stage
00:33:40.295 [Pipeline] }
00:33:40.313 [Pipeline] // node
00:33:40.318 [Pipeline] End of Pipeline
00:33:40.403 Finished: SUCCESS